Commit 3e7d4ec

Merge pull request #258034 from MicrosoftDocs/main
11/7/2023 PM Publish
2 parents 002ddfe + fea31af commit 3e7d4ec

231 files changed: +5217 additions, -1985 deletions


.openpublishing.publish.config.json

Lines changed: 6 additions & 0 deletions
```diff
@@ -230,6 +230,12 @@
       "branch": "master",
       "branch_mapping": {}
     },
+    {
+      "path_to_root": "function-app-arm-templates",
+      "url": "https://github.com/Azure-Samples/function-app-arm-templates",
+      "branch": "main",
+      "branch_mapping": {}
+    },
     {
       "path_to_root": "functions-azure-product",
       "url": "https://github.com/Azure/Azure-Functions",
```

articles/ai-services/openai/how-to/migration.md

Lines changed: 186 additions & 21 deletions
```diff
@@ -28,7 +28,7 @@ OpenAI has just released a new version of the [OpenAI Python API library](https:

 ## Known issues

-- The latest release of the [OpenAI Python library](https://pypi.org/project/openai/) doesn't currently support DALL-E when used with Azure OpenAI. DALL-E with Azure OpenAI is still supported with `0.28.1`.
+- The latest release of the [OpenAI Python library](https://pypi.org/project/openai/) doesn't currently support DALL-E when used with Azure OpenAI. DALL-E with Azure OpenAI is still supported with `0.28.1`. For those who can't wait for native DALL-E support with Azure OpenAI, we provide [two code examples](#dall-e-fix) that can be used as a workaround.
 - `embeddings_utils.py` which was used to provide functionality like cosine similarity for semantic text search is [no longer part of the OpenAI Python API library](https://github.com/openai/openai-python/issues/676).
 - You should also check the active [GitHub Issues](https://github.com/openai/openai-python/issues/703) for the OpenAI Python library.
```
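Because `embeddings_utils.py` is no longer shipped, cosine similarity is straightforward to compute yourself. A minimal sketch (not part of this commit) using `numpy`:

```python
import numpy as np


def cosine_similarity(a, b):
    # Stand-in for the helper that shipped with embeddings_utils.py:
    # cosine similarity between two embedding vectors.
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
```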

@@ -252,6 +252,171 @@ completion = client.chat.completions.create(

This hunk adds a new **DALL-E fix** section between the existing chat completions example and the **Name changes** section:

## DALL-E fix

# [DALLE-Fix](#tab/dalle-fix)

```python
import time
import json

import httpx
import openai


class CustomHTTPTransport(httpx.HTTPTransport):
    def handle_request(
        self,
        request: httpx.Request,
    ) -> httpx.Response:
        if "images/generations" in request.url.path and request.url.params[
            "api-version"
        ] in [
            "2023-06-01-preview",
            "2023-07-01-preview",
            "2023-08-01-preview",
            "2023-09-01-preview",
            "2023-10-01-preview",
        ]:
            # Reroute the request to the asynchronous submission endpoint.
            request.url = request.url.copy_with(path="/openai/images/generations:submit")
            response = super().handle_request(request)
            operation_location_url = response.headers["operation-location"]
            request.url = httpx.URL(operation_location_url)
            request.method = "GET"
            response = super().handle_request(request)
            response.read()

            # Poll the operation until it succeeds, fails, or times out.
            timeout_secs: int = 120
            start_time = time.time()
            while response.json()["status"] not in ["succeeded", "failed"]:
                if time.time() - start_time > timeout_secs:
                    timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
                    return httpx.Response(
                        status_code=400,
                        headers=response.headers,
                        content=json.dumps(timeout).encode("utf-8"),
                        request=request,
                    )

                # Honor the service's retry-after header; default to 10 seconds.
                time.sleep(int(response.headers.get("retry-after") or 10))
                response = super().handle_request(request)
                response.read()

            if response.json()["status"] == "failed":
                error_data = response.json()
                return httpx.Response(
                    status_code=400,
                    headers=response.headers,
                    content=json.dumps(error_data).encode("utf-8"),
                    request=request,
                )

            result = response.json()["result"]
            return httpx.Response(
                status_code=200,
                headers=response.headers,
                content=json.dumps(result).encode("utf-8"),
                request=request,
            )
        return super().handle_request(request)


client = openai.AzureOpenAI(
    azure_endpoint="<azure_endpoint>",
    api_key="<api_key>",
    api_version="<api_version>",
    http_client=httpx.Client(
        transport=CustomHTTPTransport(),
    ),
)
image = client.images.generate(prompt="a cute baby seal")

print(image.data[0].url)
```

# [DALLE-Fix Async](#tab/dalle-fix-async)

```python
import time
import asyncio
import json

import httpx
import openai


class AsyncCustomHTTPTransport(httpx.AsyncHTTPTransport):
    async def handle_async_request(
        self,
        request: httpx.Request,
    ) -> httpx.Response:
        if "images/generations" in request.url.path and request.url.params[
            "api-version"
        ] in [
            "2023-06-01-preview",
            "2023-07-01-preview",
            "2023-08-01-preview",
            "2023-09-01-preview",
            "2023-10-01-preview",
        ]:
            # Reroute the request to the asynchronous submission endpoint.
            request.url = request.url.copy_with(path="/openai/images/generations:submit")
            response = await super().handle_async_request(request)
            operation_location_url = response.headers["operation-location"]
            request.url = httpx.URL(operation_location_url)
            request.method = "GET"
            response = await super().handle_async_request(request)
            await response.aread()

            # Poll the operation until it succeeds, fails, or times out.
            timeout_secs: int = 120
            start_time = time.time()
            while response.json()["status"] not in ["succeeded", "failed"]:
                if time.time() - start_time > timeout_secs:
                    timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
                    return httpx.Response(
                        status_code=400,
                        headers=response.headers,
                        content=json.dumps(timeout).encode("utf-8"),
                        request=request,
                    )

                # Honor the service's retry-after header; default to 10 seconds.
                await asyncio.sleep(int(response.headers.get("retry-after") or 10))
                response = await super().handle_async_request(request)
                await response.aread()

            if response.json()["status"] == "failed":
                error_data = response.json()
                return httpx.Response(
                    status_code=400,
                    headers=response.headers,
                    content=json.dumps(error_data).encode("utf-8"),
                    request=request,
                )

            result = response.json()["result"]
            return httpx.Response(
                status_code=200,
                headers=response.headers,
                content=json.dumps(result).encode("utf-8"),
                request=request,
            )
        return await super().handle_async_request(request)


async def dall_e():
    client = openai.AsyncAzureOpenAI(
        azure_endpoint="<azure_endpoint>",
        api_key="<api_key>",
        api_version="<api_version>",
        http_client=httpx.AsyncClient(
            transport=AsyncCustomHTTPTransport(),
        ),
    )
    image = await client.images.generate(prompt="a cute baby seal")

    print(image.data[0].url)

asyncio.run(dall_e())
```

---
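Both tabs rely on the same approach: the custom transport intercepts `images/generations` requests for the listed preview API versions, reroutes them to the asynchronous `:submit` endpoint, and then polls the `operation-location` URL until the operation succeeds, fails, or times out, so the caller still sees an ordinary synchronous response.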
```diff
@@ -260,7 +425,7 @@ print(completion.model_dump_json(indent=2))
 | OpenAI Python 0.28.1 | OpenAI Python 1.x |
 | --------------- | --------------- |
 | `openai.api_base` | `openai.base_url` |
-| `openai.proxy` | `openai.proxies (docs)` |
+| `openai.proxy` | `openai.proxies` |
 | `openai.InvalidRequestError` | `openai.BadRequestError` |
 | `openai.Audio.transcribe()` | `client.audio.transcriptions.create()` |
 | `openai.Audio.translate()` | `client.audio.translations.create()` |
```
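To make one row of the table concrete, a before/after sketch of the audio transcription rename; the file name is a placeholder, `client` is the `AzureOpenAI` client created earlier in this article, and with Azure OpenAI the `model` argument is your deployment name:

```python
audio_file = open("speech.wav", "rb")  # placeholder file

# OpenAI Python 0.28.1
transcript = openai.Audio.transcribe("whisper-1", audio_file)

# OpenAI Python 1.x
transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)
```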
@@ -296,22 +461,22 @@ print(completion.model_dump_json(indent=2))

This hunk converts the bare list of removed symbols into a bulleted list:

### Removed

- `openai.api_key_path`
- `openai.app_info`
- `openai.debug`
- `openai.log`
- `openai.OpenAIError`
- `openai.Audio.transcribe_raw()`
- `openai.Audio.translate_raw()`
- `openai.ErrorObject`
- `openai.Customer`
- `openai.api_version`
- `openai.verify_ssl_certs`
- `openai.api_type`
- `openai.enable_telemetry`
- `openai.ca_bundle_path`
- `openai.requestssession` (OpenAI now uses `httpx`)
- `openai.aiosession` (OpenAI now uses `httpx`)
- `openai.Deployment` (Previously used for Azure OpenAI)
- `openai.Engine`
- `openai.File.find_matching_files()`

articles/ai-services/openai/how-to/working-with-models.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -17,7 +17,7 @@ keywords:

 Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. [Model availability varies by region](../concepts/models.md).

-You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/cognitiveservices/azureopenaistable/models/list).
+You can get a list of models that are available for both inference and fine-tuning by your Azure OpenAI resource by using the [Models List API](/rest/api/azureopenai/models/list).

 ## Model updates
```
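For reference, a minimal sketch of calling the Models List API with `requests`; the endpoint, key, and `api-version` are assumed placeholder values, not part of this commit:

```python
import requests

endpoint = "https://<your-resource>.openai.azure.com"  # placeholder
headers = {"api-key": "<api_key>"}  # placeholder

response = requests.get(
    f"{endpoint}/openai/models",
    headers=headers,
    params={"api-version": "2023-05-15"},  # use a version valid for your resource
)
response.raise_for_status()

for model in response.json()["data"]:
    # Each entry reports capability flags, such as inference and fine-tuning support.
    print(model["id"], model.get("capabilities"))
```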

articles/ai-services/openai/includes/fine-tuning-python.md

Lines changed: 4 additions & 4 deletions
```diff
@@ -130,8 +130,8 @@ After it guides you through the process of implementing suggested changes, the t

 The next step is to either choose existing prepared training data or upload new prepared training data to use when customizing your model. After you prepare your training data, you can upload your files to the service. There are two ways to upload training data:

-- [From a local file](/rest/api/cognitiveservices/azureopenaistable/files/upload)
-- [Import from an Azure Blob store or other web location](/rest/api/cognitiveservices/azureopenaistable/files/import)
+- [From a local file](/rest/api/azureopenai/files/upload)
+- [Import from an Azure Blob store or other web location](/rest/api/azureopenai/files/import)

 For large data files, we recommend that you import from an Azure Blob store. Large files can become unstable when uploaded through multipart forms because the requests are atomic and can't be retried or resumed. For more information about Azure Blob storage, see [What is Azure Blob storage?](../../../storage/blobs/storage-blobs-overview.md)
```
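For orientation, a minimal local-file upload sketch with the 1.x Python SDK; the client values and file name are assumed placeholders, not part of this commit:

```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="<azure_endpoint>",  # placeholder
    api_key="<api_key>",  # placeholder
    api_version="2023-10-01-preview",  # adjust to a version valid for your resource
)

# Upload a prepared JSONL training file for fine-tuning.
training_file = client.files.create(
    file=open("training_set.jsonl", "rb"),
    purpose="fine-tune",
)
print(training_file.id)
```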

```diff
@@ -209,7 +209,7 @@ print(response)

 ## Deploy a customized model

-When the fine-tune job succeeds, the value of the `fine_tuned_model` variable in the response body is set to the name of your customized model. Your model is now also available for discovery from the [list Models API](/rest/api/cognitiveservices/azureopenaistable/models/list). However, you can't issue completion calls to your customized model until your customized model is deployed. You must deploy your customized model to make it available for use with completion calls.
+When the fine-tune job succeeds, the value of the `fine_tuned_model` variable in the response body is set to the name of your customized model. Your model is now also available for discovery from the [list Models API](/rest/api/azureopenai/models/list). However, you can't issue completion calls to your customized model until your customized model is deployed. You must deploy your customized model to make it available for use with completion calls.

 [!INCLUDE [Fine-tuning deletion](fine-tune.md)]
```

```diff
@@ -384,7 +384,7 @@ Similarly, you can use various methods to delete your customized model:

 You can optionally delete training and validation files that you uploaded for training, and result files generated during training, from your Azure OpenAI subscription. You can use the following methods to delete your training, validation, and result files:

 - [Azure OpenAI Studio](../how-to/fine-tuning.md?pivots=programming-language-studio#delete-your-training-files)
-- The [REST APIs](/rest/api/cognitiveservices/azureopenaistable/files/delete)
+- The [REST APIs](/rest/api/azureopenai/files/delete)
 - The Python SDK

 The following Python example uses the Python SDK to delete the training, validation, and result files for your customized model:
```
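That example isn't included in this hunk; as a rough sketch, the SDK route reduces to deleting each uploaded file by ID (the IDs are placeholders, and `client` is the `AzureOpenAI` client from the upload sketch above):

```python
# Delete training, validation, and result files by ID.
for file_id in ["<training_file_id>", "<validation_file_id>", "<result_file_id>"]:
    client.files.delete(file_id)
```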

articles/ai-services/openai/reference.md

Lines changed: 1 addition & 1 deletion
```diff
@@ -791,5 +791,5 @@ Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI servic

 ## Next steps

-Learn about [ Models, and fine-tuning with the REST API](/rest/api/cognitiveservices/azureopenaistable/files).
+Learn about [Models and fine-tuning with the REST API](/rest/api/azureopenai/fine-tuning?view=rest-azureopenai-2023-10-01-preview).
 Learn more about the [underlying models that power Azure OpenAI](./concepts/models.md).
```

articles/ai-services/speech-service/batch-transcription-create.md

Lines changed: 2 additions & 2 deletions
```diff
@@ -8,7 +8,7 @@ author: eric-urban
 ms.author: eur
 ms.service: azure-ai-speech
 ms.topic: how-to
-ms.date: 10/3/2023
+ms.date: 11/7/2023
 zone_pivot_groups: speech-cli-rest
 ms.custom: devx-track-csharp
 ---
```
```diff
@@ -171,7 +171,7 @@ Here are some property options that you can use to configure a transcription whe
 |`contentContainerUrl`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information about Azure blob storage for batch transcription, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
 |`contentUrls`| You can submit individual audio files, or a whole storage container.<br/><br/>You must specify the audio data location via either the `contentContainerUrl` or `contentUrls` property. For more information, see [Locate audio files for batch transcription](batch-transcription-audio-data.md).<br/><br/>This property won't be returned in the response.|
 |`destinationContainerUrl`|The result can be stored in an Azure container. If you don't specify a container, the Speech service stores the results in a container managed by Microsoft. When the transcription job is deleted, the transcription result data is also deleted. For more information such as the supported security scenarios, see [Destination container URL](#destination-container-url).|
-|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
+|`diarization`|Indicates that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains multiple voices. Specify the minimum and maximum number of people who might be speaking. You must also set the `diarizationEnabled` property to `true`. The [transcription file](batch-transcription-get.md#transcription-result-file) will contain a `speaker` entry for each transcribed phrase.<br/><br/>You need to use this property when you expect three or more speakers. For two speakers setting `diarizationEnabled` property to `true` is enough. See an example of the property usage in [Transcriptions_Create](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create) operation description.<br/><br/>Diarization is the process of separating speakers in audio data. The batch pipeline can recognize and separate multiple speakers on mono channel recordings. The maximum number of speakers for diarization must be less than 36 and more or equal to the `minSpeakers` property (see [example](https://eastus.dev.cognitive.microsoft.com/docs/services/speech-to-text-api-v3-1/operations/Transcriptions_Create)). The feature isn't available with stereo recordings.<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later. If you set this property with any previous version, such as version 3.0, it's ignored and only two speakers are identified.|
 |`diarizationEnabled`|Specifies that diarization analysis should be carried out on the input, which is expected to be a mono channel that contains two voices. The default value is `false`.<br/><br/>For three or more voices you also need to use property `diarization` (only with Speech to text REST API version 3.1 and later).<br/><br/>When this property is selected, source audio length can't exceed 240 minutes per file.|
 |`displayName`|The name of the batch transcription. Choose a name that you can refer to later. The display name doesn't have to be unique.<br/><br/>This property is required.|
 |`displayFormWordLevelTimestampsEnabled`|Specifies whether to include word-level timestamps on the display form of the transcription results. The results are returned in the displayWords property of the transcription file. The default value is `false`.<br/><br/>**Note**: This property is only available with Speech to text REST API version 3.1 and later.|
```
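For context, a sketch of how these diarization properties fit into a v3.1 `Transcriptions_Create` request; the region, key, and URLs are assumed placeholders, not part of this commit:

```python
import requests

# Speech to text REST API v3.1 endpoint (placeholder region and key).
url = "https://<region>.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"
headers = {"Ocp-Apim-Subscription-Key": "<speech_key>"}

body = {
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": ["https://<storage>.blob.core.windows.net/<container>/audio.wav"],
    "properties": {
        # Both properties are needed when you expect three or more speakers.
        "diarizationEnabled": True,
        "diarization": {"speakers": {"minCount": 1, "maxCount": 5}},
    },
}

response = requests.post(url, headers=headers, json=body)
response.raise_for_status()
print(response.json()["self"])  # URL of the created transcription
```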

articles/aks/TOC.yml

Lines changed: 4 additions & 3 deletions
```diff
@@ -637,8 +637,11 @@
 - name: Upgrade
   items:
   - name: Upgrade options
-    href: upgrade-cluster.md
     items:
+    - name: Upgrade options
+      href: upgrade-cluster.md
+    - name: Stop cluster upgrades automatically on API breaking changes
+      href: stop-cluster-upgrade-api-breaking-changes.md
     - name: Perform manual upgrades
       items:
       - name: Upgrade an AKS cluster
@@ -655,8 +658,6 @@
       href: auto-upgrade-cluster.md
     - name: Use Planned Maintenance to schedule and control upgrades
       href: planned-maintenance.md
-    - name: Stop cluster upgrades automatically on API breaking changes
-      href: stop-cluster-upgrade-api-breaking-changes.md
     - name: Automatically upgrade AKS cluster node operating system images
       href: auto-upgrade-node-image.md
     - name: Upgrade the node image automatically with GitHub Actions
```
