
Commit 962334b

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into postRel
2 parents: fa928e1 + 5c7add3

129 files changed: 3,525 additions and 987 deletions


articles/ai-services/openai/concepts/content-filter.md

Lines changed: 25 additions & 4 deletions
@@ -6,7 +6,7 @@ author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 09/15/2023
+ms.date: 11/06/2023
 ms.custom: template-concept
 manager: nitinme
 keywords:
@@ -294,11 +294,10 @@ When annotations are enabled as shown in the code snippet below, the following i

 Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:

-# [Python](#tab/python)
+# [OpenAI Python 0.28.1](#tab/python)


 ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
 # os.getenv() for the endpoint and key assumes that you are using environment variables.

 import os
@@ -387,7 +386,6 @@ print(response)
 The following code snippet shows how to retrieve annotations when content was filtered:

 ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
 # os.getenv() for the endpoint and key assumes that you are using environment variables.

 import os
@@ -416,6 +414,29 @@ except openai.error.InvalidRequestError as e:

 ```

+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+from openai import AzureOpenAI
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.completions.create(
+    model="gpt-35-turbo-instruct", # model = "deployment_name".
+    prompt="{Example prompt where a severity level of low is detected}"
+    # Content that is detected at severity level medium or high is filtered,
+    # while content detected at severity level low isn't filtered by the content filters.
+)
+
+print(response.model_dump_json(indent=2))
+```
+
 # [JavaScript](#tab/javascrit)

 [Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
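The new OpenAI Python 1.x tab above only shows the happy path (printing the full response). For the case the 0.28.1 snippet handles — a prompt rejected outright by the content filters — a rough 1.x counterpart might look like the sketch below. It assumes the `client` object from the added block and that a filtered prompt surfaces as `openai.BadRequestError`; the exact shape of the error body isn't confirmed by this commit.

```python
import json
import openai  # assumes openai>=1.0 is installed, as in the tab above

try:
    response = client.completions.create(
        model="gpt-35-turbo-instruct",  # model = "deployment_name"
        prompt="{Example prompt that triggers the content filters}"
    )
    print(response.model_dump_json(indent=2))
except openai.BadRequestError as e:
    # e.body usually carries the parsed service error; guard in case it's absent or not a dict.
    body = e.body if isinstance(e.body, dict) else {}
    if body.get("code") == "content_filter":
        print("The prompt was blocked by the content filters.")
    print(json.dumps(body, indent=2))
```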

articles/ai-services/openai/tutorials/embeddings.md

Lines changed: 82 additions & 2 deletions
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: tutorial
-ms.date: 09/12/2023
+ms.date: 11/06/2023
 author: mrbullwinkle #noabenefraim
 ms.author: mbullwin
 recommendations: false
@@ -46,10 +46,20 @@ In this tutorial, you learn how to:

 If you haven't already, you need to install the following libraries:

+# [OpenAI Python 0.28.1](#tab/python)
+
 ```cmd
 pip install "openai==0.28.1" num2words matplotlib plotly scipy scikit-learn pandas tiktoken
 ```

+# [OpenAI Python 1.x](#tab/python-new)
+
+```console
+pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+```
+
+---
+
 <!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->

 ### Download the BillSum dataset
@@ -105,7 +115,9 @@ Run the following code in your preferred Python IDE:

 <!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->

-## Import libraries and list models
+## Import libraries
+
+# [OpenAI Python 0.28.1](#tab/python)

 ```python
 import openai
@@ -193,6 +205,23 @@ print(r.text)

 The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.

+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+import re
+import requests
+import sys
+from num2words import num2words
+import os
+import pandas as pd
+import numpy as np
+import tiktoken
+from openai import AzureOpenAI
+```
+
+---
+
 Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.

 ```python
@@ -334,10 +363,29 @@ len(decode)

 Now that we understand more about how tokenization works we can move on to embedding. It is important to note, that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it will break the documents into tokens similar (though not necessarily identical) to the examples above and then convert the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an [Azure Database to support Vector Search](../../../cosmos-db/mongodb/vcore/vector-search.md). As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.

+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
 ```

+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+client = AzureOpenAI(
+    api_key = os.getenv("AZURE_OPENAI_API_KEY"),
+    api_version = "2023-05-15",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+def generate_embeddings(text, model="text-embedding-ada-002"): # model = "deployment_name"
+    return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : generate_embeddings (x, model = 'text-embedding-ada-002')) # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+```
+
+---
+
 ```python
 df_bills
 ```
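Neither tab above shows what the embedding helper actually returns. As a quick sanity check on the new 1.x path — a sketch that assumes the `client` and `generate_embeddings` definitions from the hunk above have been run — embedding a single string and inspecting the vector length should give 1,536 dimensions for text-embedding-ada-002:

```python
# Assumes `client` and `generate_embeddings` from the OpenAI Python 1.x tab above.
sample_vector = generate_embeddings("Example bill text", model="text-embedding-ada-002")
print(type(sample_vector))  # list of floats
print(len(sample_vector))   # 1536 for text-embedding-ada-002
```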
@@ -348,6 +396,8 @@ df_bills

 As we run the search code block below, we'll embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next we'll find the closest bill embedding to the newly embedded text from our query ranked by [cosine similarity](../concepts/understand-embeddings.md).

+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 # search through the reviews for a specific product
 def search_docs(df, user_query, top_n=3, to_print=True):
@@ -369,6 +419,36 @@ def search_docs(df, user_query, top_n=3, to_print=True):
 res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
 ```

+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+def cosine_similarity(a, b):
+    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
+
+def get_embedding(text, model="text-embedding-ada-002"): # model = "deployment_name"
+    return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+def search_docs(df, user_query, top_n=4, to_print=True):
+    embedding = get_embedding(
+        user_query,
+        model="text-embedding-ada-002" # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+    )
+    df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
+
+    res = (
+        df.sort_values("similarities", ascending=False)
+        .head(top_n)
+    )
+    if to_print:
+        display(res)
+    return res
+
+
+res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
+```
+
+---
+
 **Output**:

 :::image type="content" source="../media/tutorials/query-result.png" alt-text="Screenshot of the formatted results of res once the search query has been run." lightbox="../media/tutorials/query-result.png":::
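The ranking in the new `search_docs` rests entirely on the `cosine_similarity` helper defined in the hunk above. A minimal, self-contained check of that formula on toy vectors behaves as expected:

```python
import numpy as np

def cosine_similarity(a, b):
    # Same formula as the helper added above: dot product divided by the product of the norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))   #  1.0 -> same direction (most similar)
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))   #  0.0 -> orthogonal (unrelated)
print(cosine_similarity([1.0, 0.0], [-1.0, 0.0]))  # -1.0 -> opposite direction
```

`search_docs` applies this pairwise between the query embedding and each row's `ada_v2` vector, then sorts descending and keeps the top `top_n` rows.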

articles/ai-services/speech-service/includes/release-notes/release-notes-tts.md

Lines changed: 6 additions & 0 deletions
@@ -6,6 +6,12 @@ ms.date: 02/28/2023
 ms.author: eur
 ---

+### November 2023 release
+
+#### Custom neural voice
+
+- Added support for the 24 new locales for cross-lingual voice. See the [full language list](../../language-support.md?tabs=tts#custom-neural-voice) for more information.
+
 ### October 2023 release

 #### Custom neural voice

articles/ai-services/speech-service/speech-services-quotas-and-limits.md

Lines changed: 0 additions & 2 deletions
@@ -48,8 +48,6 @@ You can use real-time speech to text with the [Speech SDK](speech-sdk.md) or the
 |--|--|--|
 | [Speech to text REST API](rest-speech-to-text.md) limit | Not available for F0 | 300 requests per minute |
 | Max audio input file size | N/A | 1 GB |
-| Max input blob size (for example, can contain more than one file in a zip archive). Note the file size limit from the preceding row. | N/A | 2.5 GB |
-| Max blob container size | N/A | 5 GB |
 | Max number of blobs per container | N/A | 10000 |
 | Max number of files per transcription request (when you're using multiple content URLs as input). | N/A | 1000 |
 | Max audio length for transcriptions with diarization enabled. | N/A | 240 minutes per file |

articles/api-management/api-management-features.md

Lines changed: 1 addition & 0 deletions
@@ -49,6 +49,7 @@ Each API Management [pricing tier](https://aka.ms/apimpricing) offers a distinct
 | [Pass-through WebSocket APIs](websocket-api.md) | No | Yes | Yes | Yes | Yes |
 | [Pass-through GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
 | [Synthetic GraphQL APIs](graphql-apis-overview.md) | Yes | Yes | Yes | Yes | Yes |
+| [Pass-through gRPC APIs](grpc-api.md) (preview) | No | Yes | No | No | Yes |

 <sup>1</sup> Enables the use of Microsoft Entra ID (and Azure AD B2C) as an identity provider for user sign in on the developer portal.<br/>
 <sup>2</sup> Including related functionality such as users, groups, issues, applications, and email templates and notifications.<br/>

articles/api-management/api-management-gateways-overview.md

Lines changed: 2 additions & 1 deletion
@@ -7,7 +7,7 @@ author: dlepow

 ms.service: api-management
 ms.topic: conceptual
-ms.date: 09/18/2023
+ms.date: 11/6/2023
 ms.author: danlep
 ---

@@ -100,6 +100,7 @@ The following table compares features available in the managed gateway versus th
 | [Pass-through GraphQL](graphql-apis-overview.md) | ✔️ | ✔️ | ✔️ |
 | [Synthetic GraphQL](graphql-apis-overview.md)| ✔️ | ✔️<sup>1</sup> | ✔️<sup>1</sup> |
 | [Pass-through WebSocket](websocket-api.md) | ✔️ || ✔️ |
+| [Pass-through gRPC](grpc-api.md) ||| ✔️ |

 <sup>1</sup> Synthetic GraphQL subscriptions (preview) aren't supported.

articles/api-management/grpc-api.md

Lines changed: 76 additions & 0 deletions
@@ -0,0 +1,76 @@
+---
+title: Import a gRPC API to Azure API Management (preview) | Microsoft Docs
+description: Learn how to import a gRPC service definition as an API to an API Management instance using the Azure portal, ARM template, or bicep template.
+services: api-management
+author: dlepow
+
+ms.service: api-management
+ms.topic: how-to
+ms.date: 10/04/2023
+ms.author: danlep
+ms.custom:
+---
+# Import a gRPC API (preview)
+
+This article shows how to import a gRPC service definition as an API in API Management. You can then manage the API in API Management, secure access and apply other policies, and pass gRPC API requests through the gateway to the gRPC backend.
+
+To add a gRPC API to API Management, you need to:
+
+* Upload the API's Protobuf (protocol buffer) definition file to API Management
+* Specify the location of your gRPC service
+* Configure the API in API Management
+
+API Management supports pass-through with the following types of gRPC service methods: unary, server streaming, client streaming, and bidirectional streaming. For background about gRPC, see [Introduction to gRPC](https://grpc.io/docs/what-is-grpc/introduction/).
+
+
+> [!NOTE]
+> * Importing a gRPC API is in preview. Currently, gRPC APIs are only supported in the self-hosted gateway, not the managed gateway for your API Management instance.
+> * Currently, testing gRPC APIs isn't supported in the test console of the Azure portal or in the API Management developer portal.
+
+[!INCLUDE [api-management-availability-premium-dev](../../includes/api-management-availability-premium-dev.md)]
+
+## Prerequisites
+
+* An API Management instance. If you don't already have one, complete the following quickstart: [Create an Azure API Management instance](get-started-create-service-instance.md).
+
+* A gateway resource provisioned in your instance. If you don't already have one, see [Provision a self-hosted gateway in Azure API Management](api-management-howto-provision-self-hosted-gateway.md).
+
+* A gRPC Protobuf (.proto) file available locally and a gRPC service that's accessible over HTTPS.
+
+## Add a gRPC API
+
+#### [Portal](#tab/portal)
+
+1. In the [Azure portal](https://portal.azure.com), navigate to your API Management instance.
+
+1. In the left menu, select **APIs** > **+ Add API**.
+
+1. Under **Define a new API**, select **gRPC**.
+
+    :::image type="content" source="./media/grpc-api/grpc-api.png" alt-text="Screenshot of creating a gRPC API in the portal." :::
+
+1. In the **Create a gRPC API** window, select **Full**.
+
+1. For a gRPC API, you must specify the following settings:
+
+    1. In **Upload schema**, select a local .proto file associated with the API to import.
+
+    1. In **gRPC server URL**, enter the address of the gRPC service. The address must be accessible over HTTPS.
+
+    1. In **Gateways**, select the gateway resource that you want to use to expose the API.
+
+        > [!IMPORTANT]
+        > In public preview, you can only select a self-hosted gateway. The **Managed** gateway isn't supported.
+
+1. Enter the remaining settings to configure your API. These settings are explained in the [Import and publish your first API](import-and-publish.md#import-and-publish-a-backend-api) tutorial.
+
+1. Select **Create**.
+
+    The API is added to the **APIs** list. You can view and update your settings by going to the **Settings** tab of the API.
+
+---
+
+
+[!INCLUDE [api-management-append-apis.md](../../includes/api-management-append-apis.md)]
+
+[!INCLUDE [api-management-define-api-topics.md](../../includes/api-management-define-api-topics.md)]
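The imported article stops at configuring the API and doesn't show a client call. As a rough illustration only — the service name, generated modules, and gateway hostname below are invented, and none of this is part of the new article — a pass-through unary call with the Python `grpcio` package might look like this:

```python
# Hypothetical example: `invoice_pb2` / `invoice_pb2_grpc` are modules you would generate
# from your own .proto file with grpcio-tools; the gateway hostname is a placeholder.
import grpc

import invoice_pb2
import invoice_pb2_grpc

# The gRPC API is exposed over HTTPS by the self-hosted gateway, so use TLS credentials.
channel = grpc.secure_channel("my-gateway.contoso.example:443", grpc.ssl_channel_credentials())
stub = invoice_pb2_grpc.InvoiceServiceStub(channel)

# Unary call; server-, client-, and bidirectional-streaming methods are also passed through.
reply = stub.GetInvoice(invoice_pb2.GetInvoiceRequest(invoice_id="12345"))
print(reply)
```

Authentication details (for example, subscription keys) aren't covered by the new article, so they're omitted from the sketch.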

articles/application-gateway/for-containers/alb-controller-release-notes.md

Lines changed: 6 additions & 3 deletions
@@ -6,7 +6,7 @@ author: greglin
 ms.service: application-gateway
 ms.subservice: appgw-for-containers
 ms.topic: article
-ms.date: 10/23/2023
+ms.date: 11/07/2023
 ms.author: greglin
 ---

@@ -23,10 +23,13 @@ Instructions for new or existing deployments of ALB Controller are found in the
 - [Upgrade existing ALB Controller](quickstart-deploy-application-gateway-for-containers-alb-controller.md#for-existing-deployments)

 ## Latest Release (Recommended)
-September 25, 2023 - 0.5.024542 - Custom Health Probes, Controller HA, Multi-site support for Ingress, [helm_release via Terraform fix](https://github.com/Azure/AKS/issues/3857), Path rewrite for Gateway API, status for Ingress resources, quality improvements
+November 6, 2023 - 0.6.1 - Gateway / Ingress API - Header rewrite support, Ingress API - URL rewrite support, Ingress multiple-TLS listener bug fix,
+two certificates maximum per host, adopting [semantic versioning (semver)](https://semver.org/), quality improvements

 ## Release history
-July 25, 2023 - 0.4.023971 - Ingress + Gateway co-existence improvements
+September 25, 2023 - 0.5.024542 - Custom Health Probes, Controller HA, Multi-site support for Ingress, [helm_release via Terraform fix](https://github.com/Azure/AKS/issues/3857), Path rewrite for Gateway API, status for Ingress resources, quality improvements
+
+July 25, 2023 - 0.4.023971 - Ingress + Gateway coexistence improvements

 July 24, 2023 - 0.4.023961 - Improved Ingress support
