Skip to content

Commit 3e22288

Browse files
Merge pull request #431 from mrbullwinkle/mrb_09_20_2024_freshness
[Azure OpenAI] Freshness
2 parents a543183 + 6853e1b commit 3e22288

File tree

15 files changed

+33
-36
lines changed

15 files changed

+33
-36
lines changed

articles/ai-services/anomaly-detector/index.yml

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -12,7 +12,7 @@ metadata:
1212
manager: nitinme
1313
ms.service: azure-ai-anomaly-detector
1414
ms.topic: landing-page
15-
ms.date: 01/18/2024
15+
ms.date: 09/20/2024
1616
ms.author: mbullwin
1717

1818

articles/ai-services/anomaly-detector/overview.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -7,7 +7,7 @@ author: mrbullwinkle
77
manager: nitinme
88
ms.service: azure-ai-anomaly-detector
99
ms.topic: overview
10-
ms.date: 01/18/2024
10+
ms.date: 09/20/2024
1111
ms.author: mbullwin
1212
keywords: anomaly detection, machine learning, algorithms
1313
---

articles/ai-services/openai/chatgpt-quickstart.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -9,7 +9,7 @@ ms.custom: build-2023, build-2023-dataai, devx-track-python, devx-track-dotnet,
99
ms.topic: quickstart
1010
author: mrbullwinkle
1111
ms.author: mbullwin
12-
ms.date: 08/31/2023
12+
ms.date: 09/20/2024
1313
zone_pivot_groups: openai-quickstart-new
1414
recommendations: false
1515
---

articles/ai-services/openai/concepts/abuse-monitoring.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -6,7 +6,7 @@ author: mrbullwinkle
66
ms.author: mbullwin
77
ms.service: azure-ai-openai
88
ms.topic: conceptual
9-
ms.date: 04/30/2024
9+
ms.date: 09/20/2024
1010
ms.custom: template-concept
1111
manager: nitinme
1212
---

articles/ai-services/openai/concepts/customizing-llms.md

Lines changed: 5 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -3,7 +3,7 @@ title: Azure OpenAI Service getting started with customizing a large language mo
33
titleSuffix: Azure OpenAI Service
44
description: Learn more about the concepts behind customizing an LLM with Azure OpenAI.
55
ms.topic: conceptual
6-
ms.date: 03/26/2024
6+
ms.date: 09/20/2024
77
ms.service: azure-ai-openai
88
manager: nitinme
99
author: mrbullwinkle
@@ -76,9 +76,9 @@ Fine-tuning requires the use of high-quality training data, in a [special exampl
7676

7777
### Illustrative use case
7878

79-
An IT department has been using GPT-4 to convert natural language queries to SQL, but they have found that the responses are not always reliably grounded in their schema, and the cost is prohibitively high.
79+
An IT department has been using GPT-4o to convert natural language queries to SQL, but they have found that the responses are not always reliably grounded in their schema, and the cost is prohibitively high.
8080

81-
They fine-tune GPT-3.5-Turbo with hundreds of requests and correct responses and produce a model that performs better than the base model with lower costs and latency.
81+
They fine-tune GPT-4o mini with hundreds of requests and correct responses and produce a model that performs better than the base model with lower costs and latency.
8282

8383
### Things to consider
8484

@@ -90,13 +90,13 @@ They fine-tune GPT-3.5-Turbo with hundreds of requests and correct responses and
9090

9191
- Fine-tuning costs:
9292

93-
- Fine-tuning can reduce costs across two dimensions: (1) by using fewer tokens depending on the task (2) by using a smaller model (for example GPT 3.5 Turbo can potentially be fine-tuned to achieve the same quality of GPT-4 on a particular task).
93+
- Fine-tuning can reduce costs across two dimensions: (1) by using fewer tokens depending on the task (2) by using a smaller model (for example GPT-4o mini can potentially be fine-tuned to achieve the same quality of GPT-4o on a particular task).
9494

9595
- Fine-tuning has upfront costs for training the model. And additional hourly costs for hosting the custom model once it's deployed.
9696

9797
### Getting started
9898

9999
- [When to use Azure OpenAI fine-tuning](./fine-tuning-considerations.md)
100100
- [Customize a model with fine-tuning](../how-to/fine-tuning.md)
101-
- [Azure OpenAI GPT 3.5 Turbo fine-tuning tutorial](../tutorials/fine-tune.md)
101+
- [Azure OpenAI GPT-4o Turbo fine-tuning tutorial](../tutorials/fine-tune.md)
102102
- [To fine-tune or not to fine-tune? (Video)](https://www.youtube.com/watch?v=0Jo-z-MFxJs)

articles/ai-services/openai/concepts/model-versions.md

Lines changed: 2 additions & 6 deletions
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ titleSuffix: Azure OpenAI
44
description: Learn about model versions in Azure OpenAI.
55
ms.service: azure-ai-openai
66
ms.topic: conceptual
7-
ms.date: 10/30/2023
7+
ms.date: 09/20/2024
88
manager: nitinme
99
author: mrbullwinkle #ChrisHMSFT
1010
ms.author: mbullwin #chrhoder
@@ -15,15 +15,11 @@ recommendations: false
1515

1616
Azure OpenAI Service is committed to providing the best generative AI models for customers. As part of this commitment, Azure OpenAI Service regularly releases new model versions to incorporate the latest features and improvements from OpenAI.
1717

18-
In particular, the GPT-3.5 Turbo and GPT-4 models see regular updates with new features. For example, versions 0613 of GPT-3.5 Turbo and GPT-4 introduced function calling. Function calling is a popular feature that allows the model to create structured outputs that can be used to call external tools.
19-
2018
## How model versions work
2119

2220
We want to make it easy for customers to stay up to date as models improve. Customers can choose to start with a particular version and to automatically update as new versions are released.
2321

24-
When a customer deploys GPT-3.5-Turbo and GPT-4 on Azure OpenAI Service, the standard behavior is to deploy the current default version – for example, GPT-4 version 0314. When the default version changes to say GPT-4 version 0613, the deployment is automatically updated to version 0613 so that customer deployments feature the latest capabilities of the model.
25-
26-
Customers can also deploy a specific version like GPT-4 0613 and choose an update policy, which can include the following options:
22+
When you deploy a model you can choose an update policy, which can include the following options:
2723

2824
* Deployments set to **Auto-update to default** automatically update to use the new default version.
2925
* Deployments set to **Upgrade when expired** automatically update when its current version is retired.

articles/ai-services/openai/concepts/red-teaming.md

Lines changed: 1 addition & 1 deletion
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ titleSuffix: Azure OpenAI Service
44
description: Learn about how red teaming and adversarial testing are an essential practice in the responsible development of systems and features using large language models (LLMs)
55
ms.service: azure-ai-openai
66
ms.topic: conceptual
7-
ms.date: 11/03/2023
7+
ms.date: 09/20/2023
88
manager: nitinme
99
author: mrbullwinkle
1010
ms.author: mbullwin

articles/ai-services/openai/concepts/system-message.md

Lines changed: 4 additions & 4 deletions
Original file line numberDiff line numberDiff line change
@@ -4,7 +4,7 @@ titleSuffix: Azure OpenAI Service
44
description: Learn about how to construct system messages also know as metaprompts to guide an AI system's behavior.
55
ms.service: azure-ai-openai
66
ms.topic: conceptual
7-
ms.date: 03/26/2024
7+
ms.date: 09/20/2024
88
ms.custom:
99
- ignite-2023
1010
manager: nitinme
@@ -69,7 +69,7 @@ Here are some examples of lines you can include:
6969

7070
## Provide examples to demonstrate the intended behavior of the model
7171

72-
When using the system message to demonstrate the intended behavior of the model in your scenario, it is helpful to provide specific examples. When providing examples, consider the following:
72+
When using the system message to demonstrate the intended behavior of the model in your scenario, it's helpful to provide specific examples. When providing examples, consider the following:
7373

7474
- **Describe difficult use cases** where the prompt is ambiguous or complicated, to give the model more visibility into how to approach such cases.
7575

@@ -166,7 +166,7 @@ Here are some examples of lines you can include to potentially mitigate differen
166166

167167
Indirect attacks, also referred to as Indirect Prompt Attacks, or Cross Domain Prompt Injection Attacks, are a type of prompt injection technique where malicious instructions are hidden in the ancillary documents that are fed into Generative AI Models. We’ve found system messages to be an effective mitigation for these attacks, by way of spotlighting.
168168

169-
**Spotlighting** is a family of techniques that helps large language models (LLMs) distinguish between valid system instructions and potentially untrustworthy external inputs. It is based on the idea of transforming the input text in a way that makes it more salient to the model, while preserving its semantic content and task performance.
169+
**Spotlighting** is a family of techniques that helps large language models (LLMs) distinguish between valid system instructions and potentially untrustworthy external inputs. It's based on the idea of transforming the input text in a way that makes it more salient to the model, while preserving its semantic content and task performance.
170170

171171
- **Delimiters** are a natural starting point to help mitigate indirect attacks. Including delimiters in your system message helps to explicitly demarcate the location of the input text in the system message. You can choose one or more special tokens to prepend and append the input text, and the model will be made aware of this boundary. By using delimiters, the model will only handle documents if they contain the appropriate delimiters, which reduces the success rate of indirect attacks. However, since delimiters can be subverted by clever adversaries, we recommend you continue on to the other spotlighting approaches.
172172

@@ -182,7 +182,7 @@ Below is an example of a potential system message, for a retail company deployin
182182

183183
:::image type="content" source="../media/concepts/system-message/template.png" alt-text="Screenshot of metaprompts influencing a chatbot conversation." lightbox="../media/concepts/system-message/template.png":::
184184

185-
Finally, remember that system messages, or metaprompts, are not "one size fits all." Use of these type of examples has varying degrees of success in different applications. It is important to try different wording, ordering, and structure of system message text to reduce identified harms, and to test the variations to see what works best for a given scenario.
185+
Finally, remember that system messages, or metaprompts, are not "one size fits all." Use of these type of examples has varying degrees of success in different applications. It's important to try different wording, ordering, and structure of system message text to reduce identified harms, and to test the variations to see what works best for a given scenario.
186186

187187
## Next steps
188188

articles/ai-services/openai/how-to/latency.md

Lines changed: 3 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -5,7 +5,7 @@ description: Learn about performance and latency with Azure OpenAI
55
manager: nitinme
66
ms.service: azure-ai-openai
77
ms.topic: how-to
8-
ms.date: 02/07/2024
8+
ms.date: 09/20/2024
99
author: mrbullwinkle
1010
ms.author: mbullwin
1111
recommendations: false
@@ -52,7 +52,7 @@ There are several factors that you can control to improve per-call latency of yo
5252

5353
### Model selection
5454

55-
Latency varies based on what model you're using. For an identical request, expect that different models have different latencies for the chat completions call. If your use case requires the lowest latency models with the fastest response times, we recommend the latest models in the [GPT-3.5 Turbo model series](../concepts/models.md#gpt-35-models).
55+
Latency varies based on what model you're using. For an identical request, expect that different models have different latencies for the chat completions call. If your use case requires the lowest latency models with the fastest response times, we recommend the latest [GPT-4o mini model](../concepts/models.md).
5656

5757
### Generation size and Max tokens
5858

@@ -128,7 +128,7 @@ Time from the first token to the last token, divided by the number of generated
128128

129129
## Summary
130130

131-
* **Model latency**: If model latency is important to you, we recommend trying out our latest models in the [GPT-3.5 Turbo model series](../concepts/models.md).
131+
* **Model latency**: If model latency is important to you, we recommend trying out the [GPT-4o mini model](../concepts/models.md).
132132

133133
* **Lower max tokens**: OpenAI has found that even in cases where the total number of tokens generated is similar the request with the higher value set for the max token parameter will have more latency.
134134

articles/ai-services/openai/how-to/migration.md

Lines changed: 2 additions & 2 deletions
Original file line numberDiff line numberDiff line change
@@ -7,13 +7,13 @@ ms.author: mbullwin
77
ms.service: azure-ai-openai
88
ms.custom: devx-track-python
99
ms.topic: how-to
10-
ms.date: 02/26/2024
10+
ms.date: 09/26/2024
1111
manager: nitinme
1212
---
1313

1414
# Migrating to the OpenAI Python API library 1.x
1515

16-
OpenAI has just released a new version of the [OpenAI Python API library](https://github.com/openai/openai-python/). This guide is supplemental to [OpenAI's migration guide](https://github.com/openai/openai-python/discussions/742) and will help bring you up to speed on the changes specific to Azure OpenAI.
16+
OpenAI released a new version of the [OpenAI Python API library](https://github.com/openai/openai-python/). This guide is supplemental to [OpenAI's migration guide](https://github.com/openai/openai-python/discussions/742) and will help bring you up to speed on the changes specific to Azure OpenAI.
1717

1818
## Updates
1919

0 commit comments

Comments
 (0)