
Commit f2ca94b

Merge commit: 2 parents ee3da63 + ef25ffc

252 files changed: +4045 −1749 lines changed

.openpublishing.redirection.container-registry.json

Lines changed: 11 additions & 1 deletion
@@ -109,7 +109,7 @@
         },
         {
             "source_path": "articles/container-registry/container-registry-diagnostics-audit-logs.md",
-            "redirect_url": "/azure/container-registry/monitor-service"
+            "redirect_url": "/azure/container-registry/monitor-container-registry"
         },
         {
             "source_path_from_root": "/articles/container-registry/container-registry-managed-get-started-azure-cli.md",
@@ -155,6 +155,16 @@
             "source_path_from_root": "/articles/container-registry/github-action-scan.md",
             "redirect_url": "/azure/developer/github/",
             "redirect_document_id": false
+        },
+        {
+            "source_path": "articles/container-registry/monitor-service.md",
+            "redirect_url": "/azure/container-registry/monitor-container-registry",
+            "redirect_document_id": true
+        },
+        {
+            "source_path": "articles/container-registry/monitor-service-reference.md",
+            "redirect_url": "/azure/container-registry/monitor-container-registry-reference",
+            "redirect_document_id": true
         }
     ]
 }

.openpublishing.redirection.json

Lines changed: 17 additions & 0 deletions
@@ -4878,6 +4878,13 @@
             "redirect_document_id": false
         },
         {
+
+            "source_path_from_root": "/articles/backup/azure-backup-move-vaults-across-regions.md",
+            "redirect_url": "/azure/operational-excellence/relocation-backup",
+            "redirect_document_id": false
+        },
+        {
+
             "source_path_from_root": "/articles/cosmos-db/how-to-move-regions.md",
             "redirect_url": "/azure/operational-excellence/relocation-cosmos-db",
             "redirect_document_id": false
@@ -4931,6 +4938,16 @@
             "source_path_from_root": "/articles/virtual-network/ip-services/create-vm-dual-stack-ipv6-powershell.md",
             "redirect_url": "/azure/virtual-network/ip-services/create-vm-dual-stack-ipv6-portal",
             "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/virtual-network/ip-services/routing-preference-powershell.md",
+            "redirect_url": "/azure/virtual-network/ip-services/routing-preference-portal",
+            "redirect_document_id": false
+        },
+        {
+            "source_path_from_root": "/articles/virtual-network/ip-services/routing-preference-cli.md",
+            "redirect_url": "/azure/virtual-network/ip-services/routing-preference-portal",
+            "redirect_document_id": false
         }

     ]
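
Both redirection files share the same Open Publishing schema: each entry maps a retired article path (`source_path` or `source_path_from_root`) to its replacement URL, with `redirect_document_id` controlling whether the old document ID follows the redirect. A minimal pre-merge sanity check might look like the sketch below; the script name, the top-level `redirections` key, and the one-source-key rule are illustrative assumptions, not part of this commit.

```python
# check_redirections.py -- hypothetical validator for Open Publishing
# redirection files; assumes a top-level "redirections" array.
import json
import sys

SOURCE_KEYS = ("source_path", "source_path_from_root")

def check(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        entries = json.load(f).get("redirections", [])
    errors = 0
    for i, entry in enumerate(entries):
        # Assumed rule: exactly one of the two source-path spellings per entry.
        if sum(key in entry for key in SOURCE_KEYS) != 1:
            print(f"{path}[{i}]: expected exactly one of {SOURCE_KEYS}")
            errors += 1
        # redirect_url should be root-relative, e.g. /azure/container-registry/...
        if not entry.get("redirect_url", "").startswith("/"):
            print(f"{path}[{i}]: redirect_url missing or not root-relative")
            errors += 1
    return errors

if __name__ == "__main__":
    sys.exit(1 if sum(check(p) for p in sys.argv[1:]) else 0)
```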

articles/active-directory-b2c/custom-email-sendgrid.md

Lines changed: 3 additions & 3 deletions
@@ -542,9 +542,9 @@ The Localization element allows you to support multiple locales or languages in
       <LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_send_new_code">Send new code</LocalizedString>
       <LocalizedString ElementType="DisplayControl" ElementId="emailVerificationControl" StringId="but_change_claims">Change e-mail</LocalizedString>
       <!-- Claims-->
-      <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="DisplayName">Verification Code</LocalizedString>
-      <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="UserHelpText">Verification code received in the email.</LocalizedString>
-      <LocalizedString ElementType="ClaimType" ElementId="emailVerificationCode" StringId="AdminHelpText">Verification code received in the email.</LocalizedString>
+      <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="DisplayName">Verification Code</LocalizedString>
+      <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="UserHelpText">Verification code received in the email.</LocalizedString>
+      <LocalizedString ElementType="ClaimType" ElementId="VerificationCode" StringId="AdminHelpText">Verification code received in the email.</LocalizedString>
       <LocalizedString ElementType="ClaimType" ElementId="email" StringId="DisplayName">Email</LocalizedString>
       <!-- Email validation error messages-->
       <LocalizedString ElementType="ErrorMessage" StringId="UserMessageIfSessionDoesNotExist">You have exceeded the maximum time allowed.</LocalizedString>

articles/ai-services/openai/concepts/use-your-data.md

Lines changed: 12 additions & 8 deletions
@@ -579,14 +579,18 @@ These estimates will vary based on the values set for the above parameters. For

 The estimates also depend on the nature of the documents and questions being asked. For example, if the questions are open-ended, the responses are likely to be longer. Similarly, a longer system message would contribute to a longer prompt that consumes more tokens, and if the conversation history is long, the prompt will be longer.

-| Model | Max tokens for system message | Max tokens for model response |
-|--|--|--|
-| GPT-35-0301 | 400 | 1500 |
-| GPT-35-0613-16K | 1000 | 3200 |
-| GPT-4-0613-8K | 400 | 1500 |
-| GPT-4-0613-32K | 2000 | 6400 |
-
-The table above shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response. Additionally, the following also consume tokens:
+| Model | Max tokens for system message |
+|--|--|
+| GPT-35-0301 | 400 |
+| GPT-35-0613-16K | 1000 |
+| GPT-4-0613-8K | 400 |
+| GPT-4-0613-32K | 2000 |
+| GPT-35-turbo-0125 | 2000 |
+| GPT-4-turbo-0409 | 4000 |
+| GPT-4o | 4000 |
+| GPT-4o-mini | 4000 |
+
+The table above shows the maximum number of tokens that can be used for the [system message](#system-message). To see the maximum tokens for the model response, see the [models article](./models.md#gpt-4-and-gpt-4-turbo-models). Additionally, the following also consume tokens:

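The replacement table drops the response-cap column (those limits now live in the models article) and adds caps for newer models. A candidate system message can be sanity-checked against these caps with a tokenizer such as `tiktoken`; the sketch below is illustrative only, not part of this commit, and the encoding choices are assumptions based on tiktoken's published model mappings.

```python
# Illustrative only: check a system message against the caps in the new table.
import tiktoken

SYSTEM_MESSAGE_TOKEN_CAPS = {
    "gpt-35-turbo-0125": 2000,
    "gpt-4-turbo-0409": 4000,
    "gpt-4o": 4000,
    "gpt-4o-mini": 4000,
}

def system_message_fits(model: str, system_message: str) -> bool:
    # GPT-4o models tokenize with o200k_base; earlier chat models use cl100k_base.
    encoding = tiktoken.get_encoding("o200k_base" if "4o" in model else "cl100k_base")
    return len(encoding.encode(system_message)) <= SYSTEM_MESSAGE_TOKEN_CAPS[model]

print(system_message_fits("gpt-4o", "Answer only from the retrieved documents."))  # True
```
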
articles/ai-services/openai/how-to/deployment-types.md

Lines changed: 2 additions & 4 deletions
@@ -55,11 +55,9 @@ Standard deployments are optimized for low to medium volume workloads with high
 > [!IMPORTANT]
 > Data might be processed outside of the resource's Azure geography, but data storage remains in its Azure geography. [Learn more about data residency](https://azure.microsoft.com/explore/global-infrastructure/data-residency/).

-Global deployments are available in the same Azure OpenAI resources as non-global offers but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard will provide the highest default quota for new models and eliminates the need to load balance across multiple resources.
+Global deployments are available in the same Azure OpenAI resources as non-global deployment types but allow you to leverage Azure's global infrastructure to dynamically route traffic to the data center with best availability for each request. Global standard provides the highest default quota and eliminates the need to load balance across multiple resources.

-The deployment type is optimized for low to medium volume workloads with high burstiness. Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md).
-
-For customers that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput.
+Customers with high consistent volume may experience greater latency variability. The threshold is set per model. See the [quotas page to learn more](./quota.md). For applications that require the lower latency variance at large workload usage, we recommend purchasing provisioned throughput.

 ### How to disable access to global deployments in your subscription

articles/ai-services/openai/quotas-limits.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ The following sections provide you with a quick guide to the default quotas and

 |Tier| Quota Limit in tokens per minute (TPM) | Requests per minute |
 |---|:---:|:---:|
-|Enterprise agreement | 30 M | 60 K |
+|Enterprise agreement | 30 M | 180 K |
 |Default | 450 K | 2.7 K |

 M = million | K = thousand
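
The corrected row changes which limit binds first: at 30 M TPM and 180 K RPM, the request-rate cap stops being the bottleneck once the average request exceeds 30,000,000 / 180,000 ≈ 167 tokens. A back-of-the-envelope sketch (plain arithmetic, not an Azure API call):

```python
# Back-of-the-envelope: which enterprise-agreement limit binds first?
TPM, RPM = 30_000_000, 180_000  # from the corrected table row

def binding_limit(avg_tokens_per_request: int) -> str:
    requests_allowed_by_tokens = TPM / avg_tokens_per_request
    return "TPM" if requests_allowed_by_tokens < RPM else "RPM"

print(binding_limit(100))   # RPM: small requests hit the 180 K request cap first
print(binding_limit(500))   # TPM: token budget allows only 60 K requests/minute
print(TPM // RPM)           # 166 -> break-even average request size in tokens
```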

articles/ai-services/speech-service/includes/how-to/speech-synthesis/python.md

Lines changed: 1 addition & 1 deletion
@@ -86,7 +86,7 @@ In this example, use the `AudioDataStream` constructor to get a stream from the
 ```python
 speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
 result = speech_synthesizer.speak_text_async("I'm excited to try text to speech").get()
-stream = AudioDataStream(result)
+stream = speechsdk.AudioDataStream(result)
 ```

 At this point, you can implement any custom behavior by using the resulting `stream` object.
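
The one-line fix qualifies `AudioDataStream` with the `speechsdk` module alias that the snippet's other calls already use; the bare name is undefined unless it was imported explicitly. For context, a minimal self-contained version of the corrected snippet might look like this (the key, region, and output filename are placeholders, not part of the commit):

```python
import azure.cognitiveservices.speech as speechsdk

# Placeholders -- substitute your own Speech resource key and region.
speech_config = speechsdk.SpeechConfig(subscription="YOUR_KEY", region="YOUR_REGION")

# audio_config=None keeps synthesized audio in memory rather than playing it.
speech_synthesizer = speechsdk.SpeechSynthesizer(speech_config=speech_config, audio_config=None)
result = speech_synthesizer.speak_text_async("I'm excited to try text to speech").get()

# AudioDataStream lives in the speechsdk module, hence the qualified name.
stream = speechsdk.AudioDataStream(result)
stream.save_to_wav_file("output.wav")
```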

articles/ai-studio/concepts/fine-tuning-overview.md

Lines changed: 15 additions & 3 deletions
@@ -99,6 +99,7 @@ There isn't a single right answer to this question, but you should have clearly

 Now that you know when to leverage fine-tuning for your use-case, you can go to Azure AI Studio to find several models available to fine-tune including:
 - Azure OpenAI models
+- Phi-3 family of models
 - Meta Llama 2 family models
 - Meta Llama 3.1 family of models

@@ -119,22 +120,33 @@ The following Azure OpenAI models are supported in Azure AI Studio for fine-tuni

 Please note for fine-tuning Azure OpenAI models, you must add a connection to an Azure OpenAI resource with a supported region to your project.

+### Phi-3 family models
+The following Phi-3 family models are supported in Azure AI Studio for fine-tuning:
+- `Phi-3-mini-4k-instruct`
+- `Phi-3-mini-128k-instruct`
+- `Phi-3-medium-4k-instruct`
+- `Phi-3-medium-128k-instruct`
+
+Fine-tuning of Phi-3 models is currently supported in projects located in East US2.
+
 ### Llama 2 family models
 The following Llama 2 family models are supported in Azure AI Studio for fine-tuning:
 - `Meta-Llama-2-70b`
 - `Meta-Llama-2-7b`
 - `Meta-Llama-2-13b`

-Fine-tuning of Llama 2 models is currently supported in projects located in West US 3.
+Fine-tuning of Llama 2 models is currently supported in projects located in West US3.

 ### Llama 3.1 family models
 The following Llama 3.1 family models are supported in Azure AI Studio for fine-tuning:
 - `Meta-Llama-3.1-70b-Instruct`
-- `Meta-Llama-3.1-7b-Instruct`
+- `Meta-Llama-3.1-8b-Instruct`

-Fine-tuning of Llama 3.1 models is currently supported in projects located in West US 3.
+Fine-tuning of Llama 3.1 models is currently supported in projects located in West US3.

 ## Related content

 - [Learn how to fine-tune an Azure OpenAI model in Azure AI Studio](../../ai-services/openai/how-to/fine-tuning.md?context=/azure/ai-studio/context/context)
 - [Learn how to fine-tune a Llama 2 model in Azure AI Studio](../how-to/fine-tune-model-llama.md)
+- [Learn how to fine-tune a Phi-3 model in Azure AI Studio](../how-to/fine-tune-phi-3.md)
+- [How to deploy Phi-3 family of small language models with Azure AI Studio](../how-to/deploy-models-phi-3.md)
