
Commit 6596e1e

committed
update
1 parent bd5ff28 commit 6596e1e

File tree

2 files changed (+23 −1 lines)


articles/ai-services/openai/how-to/prompt-caching.md

Lines changed: 1 addition & 0 deletions
@@ -22,6 +22,7 @@ Caches are typically cleared within 5-10 minutes of inactivity and are always re
Currently only the following models support prompt caching with Azure OpenAI:

+- `o1-2024-12-17`
- `o1-preview-2024-09-12`
- `o1-mini-2024-09-12`
- `gpt-4o-2024-05-13`
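When one of the models above serves a request, cache hits show up in the response's `usage` block. The following is a minimal sketch, assuming the `prompt_tokens_details.cached_tokens` field of the chat completions usage payload; the payload values here are illustrative, not from a real call.

```python
# Hypothetical sketch: reading the cache-hit count from a chat completions
# response. The usage block reports how many prompt tokens were served
# from the prompt cache (field names assumed from the chat completions
# usage schema).

def cached_token_count(usage: dict) -> int:
    """Return the number of cached prompt tokens from a usage payload."""
    details = usage.get("prompt_tokens_details") or {}
    return details.get("cached_tokens", 0)

# Example usage payload shaped like an API response (values illustrative):
usage = {
    "prompt_tokens": 2006,
    "completion_tokens": 300,
    "total_tokens": 2306,
    "prompt_tokens_details": {"cached_tokens": 1920},
}

print(cached_token_count(usage))  # 1920
```

A count of `0` simply means no prefix of the prompt matched a live cache entry, for example after the 5-10 minute inactivity window has cleared it.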

articles/ai-services/openai/whats-new.md

Lines changed: 22 additions & 1 deletion
@@ -11,7 +11,7 @@ ms.custom:
- references_regions
- ignite-2024
ms.topic: whats-new
-ms.date: 11/17/2024
+ms.date: 11/18/2024
recommendations: false
---

@@ -21,12 +21,33 @@ This article provides a summary of the latest releases and major documentation u
## December 2024

+### o1 reasoning model released for limited access
+
+The latest `o1` model is now available for API access and model deployment. **Registration is required, and access will be granted based on Microsoft's eligibility criteria.** Customers who previously applied and received access to `o1-preview` don't need to reapply, as they are automatically on the waitlist for the latest model.
+
+Request access: [limited access model application](https://aka.ms/OAI/o1access)
+
+To learn more about the advanced `o1` series models, see [getting started with o1 series reasoning models](../how-to/reasoning.md).
+
+### Region availability
+
+| Model | Region |
+|---|---|
+| `o1` | East US 2 (Global Standard) <br> Sweden Central (Global Standard) |
+| `o1-preview` | See the [models table](#global-standard-model-availability). |
+| `o1-mini` | See the [models table](#global-provisioned-managed-model-availability). |
+
### Preference fine-tuning (preview)
[Direct preference optimization (DPO)](./how-to/fine-tuning.md#direct-preference-optimization-dpo-preview) is a new alignment technique for large language models, designed to adjust model weights based on human preferences. Unlike reinforcement learning from human feedback (RLHF), DPO does not require fitting a reward model and uses simpler data (binary preferences) for training. This method is computationally lighter and faster, making it equally effective at alignment while being more efficient. DPO is especially useful in scenarios where subjective elements like tone, style, or specific content preferences are important. We’re excited to announce the public preview of DPO in Azure OpenAI Service, starting with the `gpt-4o-2024-08-06` model.
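
A DPO training file is JSON Lines, one preference pair per line. The sketch below builds a single hypothetical example; the field names (`input`, `preferred_output`, `non_preferred_output`) are assumptions based on the DPO fine-tuning JSONL format, so check the schema documented for your API version before uploading.

```python
import json

# Hypothetical sketch of one DPO training example: a binary preference
# pair, no reward model required. Field names are assumptions based on
# the DPO fine-tuning JSONL format.
example = {
    "input": {
        "messages": [
            {"role": "user", "content": "Suggest a subject line for a launch email."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Introducing our fastest release yet"}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "EMAIL ABOUT PRODUCT!!!"}
    ],
}

# Each line of the training file is one JSON object like this.
line = json.dumps(example)
print(sorted(json.loads(line).keys()))
```

Because only a binary preference is labeled per prompt, assembling this data is typically cheaper than building the scored comparisons a reward model needs.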
For fine-tuning model region availability, see the [models page](./concepts/models.md#fine-tuning-models).

+### Stored completions & distillation
+
+[Stored completions](./how-to/stored-completions.md) allow you to capture the conversation history from chat completions sessions to use as datasets for [evaluations](./evaluations.md) and [fine-tuning](./fine-tuning.md).
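
Capturing a session is an opt-in flag on the chat completions call. This sketch only assembles the request keyword arguments rather than calling the service; the `store` flag and `metadata` tags are assumptions based on the stored completions API, and the deployment name is a placeholder.

```python
# Hypothetical sketch: opting a chat completions call into stored
# completions. `store` and `metadata` are assumptions based on the
# stored completions API; names below are placeholders.

def build_request(deployment: str, messages: list, tags: dict) -> dict:
    """Keyword arguments for a chat completions call with storage enabled."""
    return {
        "model": deployment,   # your Azure OpenAI deployment name
        "messages": messages,
        "store": True,         # persist this completion server-side
        "metadata": tags,      # free-form tags for filtering later
    }

kwargs = build_request(
    "gpt-4o",  # placeholder deployment name
    [{"role": "user", "content": "Hello"}],
    {"project": "docs-demo"},
)
print(kwargs["store"])  # True
```

The metadata tags are what make the captured sessions easy to slice into evaluation and fine-tuning datasets afterwards.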
### GPT-4o 2024-11-20