Commit 67c5ebd

Merge pull request #257846 from MicrosoftDocs/main
11/7 11:00 AM IST Publishing
2 parents 47fdffa + 26dc883 commit 67c5ebd

File tree

559 files changed: +3054 additions, −6442 deletions


.openpublishing.redirection.json

Lines changed: 70 additions & 0 deletions

```diff
@@ -783,6 +783,76 @@
   {
     "source_path": "articles/cognitive-services/bing-visual-search/quickstarts/ruby.md",
     "redirect_url": "/previous-versions/azure/cognitive-services/bing-visual-search/quickstarts/ruby",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-api-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-api-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-application-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-application-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-customize-styles.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-customize-styles",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-developer-portal-templates-reference.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-developer-portal-templates-reference",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-developer-portal-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-developer-portal-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-issue-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-issue-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-modify-content-layout.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-modify-content-layout",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-page-controls.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-page-controls",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-page-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-page-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-product-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-product-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-template-data-model-reference.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-template-data-model-reference",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-template-resources.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-template-resources",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/api-management-user-profile-templates.md",
+    "redirect_url": "/previous-versions/azure/api-management/api-management-user-profile-templates",
+    "redirect_document_id": false
+  },
+  {
+    "source_path": "articles/api-management/developer-portal-deprecated-migration.md",
+    "redirect_url": "/previous-versions/azure/api-management/developer-portal-deprecated-migration",
     "redirect_document_id": false
   },
   {
```
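Every redirect entry added above carries the same three keys. As a quick sanity check before publishing, the shape can be validated with a short script (a sketch only — `check_redirects` is a hypothetical helper, not part of the Open Publishing toolchain):

```python
import json

def check_redirects(entries):
    """Return a list of error strings for redirect entries missing required keys."""
    required = ("source_path", "redirect_url", "redirect_document_id")
    errors = []
    for i, entry in enumerate(entries):
        for key in required:
            if key not in entry:
                errors.append(f"entry {i}: missing {key!r}")
    return errors

# One of the entries from the diff above, as a standalone sample.
sample = json.loads("""[
  {
    "source_path": "articles/api-management/api-management-api-templates.md",
    "redirect_url": "/previous-versions/azure/api-management/api-management-api-templates",
    "redirect_document_id": false
  }
]""")

print(check_redirects(sample))  # → []
```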

articles/ai-services/openai/concepts/content-filter.md

Lines changed: 25 additions & 4 deletions

````diff
@@ -6,7 +6,7 @@ author: mrbullwinkle
 ms.author: mbullwin
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 09/15/2023
+ms.date: 11/06/2023
 ms.custom: template-concept
 manager: nitinme
 keywords:
@@ -294,11 +294,10 @@ When annotations are enabled as shown in the code snippet below, the following i
 
 Annotations are currently in preview for Completions and Chat Completions (GPT models); the following code snippet shows how to use annotations in preview:
 
-# [Python](#tab/python)
+# [OpenAI Python 0.28.1](#tab/python)
 
 
 ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
 # os.getenv() for the endpoint and key assumes that you are using environment variables.
 
 import os
@@ -387,7 +386,6 @@ print(response)
 The following code snippet shows how to retrieve annotations when content was filtered:
 
 ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
 # os.getenv() for the endpoint and key assumes that you are using environment variables.
 
 import os
@@ -416,6 +414,29 @@ except openai.error.InvalidRequestError as e:
 
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+from openai import AzureOpenAI
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.completions.create(
+    model="gpt-35-turbo-instruct", # model = "deployment_name".
+    prompt="{Example prompt where a severity level of low is detected}"
+    # Content that is detected at severity level medium or high is filtered,
+    # while content detected at severity level low isn't filtered by the content filters.
+)
+
+print(response.model_dump_json(indent=2))
+```
+
 # [JavaScript](#tab/javascrit)
 
 [Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
````
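The annotations this change documents surface per-category filtering results on each choice. As an illustration of consuming that payload offline (the dict below is a mocked, illustrative response shape, not real API output):

```python
# A sketch of reading content-filter annotations from a mocked completion
# response; the category names mirror the annotations discussed above, but
# this dict is illustrative only.
response = {
    "choices": [
        {
            "text": "Example completion",
            "content_filter_results": {
                "hate": {"filtered": False, "severity": "safe"},
                "self_harm": {"filtered": False, "severity": "safe"},
                "sexual": {"filtered": False, "severity": "safe"},
                "violence": {"filtered": True, "severity": "medium"},
            },
        }
    ]
}

# Report every category that triggered filtering.
for choice in response["choices"]:
    for category, result in choice["content_filter_results"].items():
        if result["filtered"]:
            print(f"{category} filtered at severity {result['severity']}")
# → violence filtered at severity medium
```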

articles/ai-services/openai/how-to/function-calling.md

Lines changed: 74 additions & 3 deletions

````diff
@@ -6,7 +6,7 @@ author: mrbullwinkle #dereklegenzoff
 ms.author: mbullwin #delegenz
 ms.service: azure-ai-openai
 ms.topic: how-to
-ms.date: 07/20/2023
+ms.date: 11/06/2023
 manager: nitinme
 ---
 
@@ -29,8 +29,10 @@ To use function calling with the Chat Completions API, you need to include two n
 
 When functions are provided, by default the `function_call` will be set to `"auto"` and the model will decide whether or not a function should be called. Alternatively, you can set the `function_call` parameter to `{"name": "<insert-function-name>"}` to force the API to call a specific function or you can set the parameter to `"none"` to prevent the model from calling any functions.
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
-# Note: The openai-python library support for Azure OpenAI is in preview.
+
 import os
 import openai
 
@@ -69,7 +71,7 @@ functions= [
 ]
 
 response = openai.ChatCompletion.create(
-    engine="gpt-35-turbo-0613",
+    engine="gpt-35-turbo-0613", # engine = "deployment_name"
     messages=messages,
     functions=functions,
     function_call="auto",
@@ -92,6 +94,74 @@ The response from the API includes a `function_call` property if the model deter
 
 In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+from openai import AzureOpenAI
+
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+messages= [
+    {"role": "user", "content": "Find beachfront hotels in San Diego for less than $300 a month with free breakfast."}
+]
+
+functions= [
+    {
+        "name": "search_hotels",
+        "description": "Retrieves hotels from the search index based on the parameters provided",
+        "parameters": {
+            "type": "object",
+            "properties": {
+                "location": {
+                    "type": "string",
+                    "description": "The location of the hotel (i.e. Seattle, WA)"
+                },
+                "max_price": {
+                    "type": "number",
+                    "description": "The maximum price for the hotel"
+                },
+                "features": {
+                    "type": "string",
+                    "description": "A comma separated list of features (i.e. beachfront, free wifi, etc.)"
+                }
+            },
+            "required": ["location"]
+        }
+    }
+]
+
+response = client.chat.completions.create(
+    model="gpt-35-turbo-0613", # model = "deployment_name"
+    messages= messages,
+    functions = functions,
+    function_call="auto",
+)
+
+print(response.choices[0].message.model_dump_json(indent=2))
+```
+
+The response from the API includes a `function_call` property if the model determines that a function should be called. The `function_call` property includes the name of the function to call and the arguments to pass to the function. The arguments are a JSON string that you can parse and use to call your function.
+
+```json
+{
+  "content": null,
+  "role": "assistant",
+  "function_call": {
+    "arguments": "{\n  \"location\": \"San Diego\",\n  \"max_price\": 300,\n  \"features\": \"beachfront, free breakfast\"\n}",
+    "name": "search_hotels"
+  }
+}
+```
+
+In some cases, the model may generate both `content` and a `function_call`. For example, for the prompt above the content could say something like "Sure, I can help you find some hotels in San Diego that match your criteria" along with the function_call.
+
+---
+
 ## Working with function calling
 
 The following section goes into additional detail on how to effectively use functions with the Chat Completions API.
@@ -107,6 +177,7 @@ If you want to describe a function that doesn't accept any parameters, use `{"ty
 ### Managing the flow with functions
 
 ```python
+
 response = openai.ChatCompletion.create(
     deployment_id="gpt-35-turbo-0613",
     messages=messages,
````
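The changes above show the model returning `function_call` with a JSON-string `arguments` field. A minimal sketch of the dispatch step — parsing that string and invoking a matching local function (the `search_hotels` handler here is hypothetical, standing in for a real search backend):

```python
import json

# Hypothetical local handler standing in for a real hotel-search backend.
def search_hotels(location, max_price=None, features=None):
    return f"Searching hotels in {location} (max price: {max_price}, features: {features})"

# The function_call payload from the example response, as a plain dict.
function_call = {
    "name": "search_hotels",
    "arguments": "{\n  \"location\": \"San Diego\",\n  \"max_price\": 300,\n  \"features\": \"beachfront, free breakfast\"\n}",
}

available_functions = {"search_hotels": search_hotels}

# The arguments arrive as a JSON string; parse them before calling.
args = json.loads(function_call["arguments"])
result = available_functions[function_call["name"]](**args)
print(result)
# → Searching hotels in San Diego (max price: 300, features: beachfront, free breakfast)
```

Looking up the handler in a dict (rather than `eval`-ing the name) keeps the model from invoking anything you didn't explicitly expose.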

articles/ai-services/openai/tutorials/embeddings.md

Lines changed: 82 additions & 2 deletions

````diff
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: tutorial
-ms.date: 09/12/2023
+ms.date: 11/06/2023
 author: mrbullwinkle #noabenefraim
 ms.author: mbullwin
 recommendations: false
@@ -46,10 +46,20 @@ In this tutorial, you learn how to:
 
 If you haven't already, you need to install the following libraries:
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```cmd
 pip install "openai==0.28.1" num2words matplotlib plotly scipy scikit-learn pandas tiktoken
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```console
+pip install openai num2words matplotlib plotly scipy scikit-learn pandas tiktoken
+```
+
+---
+
 <!--Alternatively, you can use our [requirements.txt file](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/requirements.txt).-->
 
 ### Download the BillSum dataset
@@ -105,7 +115,9 @@ Run the following code in your preferred Python IDE:
 
 <!--If you wish to view the Jupyter notebook that corresponds to this tutorial you can download the tutorial from our [samples repo](https://github.com/Azure-Samples/Azure-OpenAI-Docs-Samples/blob/main/Samples/Tutorials/Embeddings/embedding_billsum.ipynb).-->
 
-## Import libraries and list models
+## Import libraries
+
+# [OpenAI Python 0.28.1](#tab/python)
 
 ```python
 import openai
@@ -193,6 +205,23 @@ print(r.text)
 
 The output of this command will vary based on the number and type of models you've deployed. In this case, we need to confirm that we have an entry for **text-embedding-ada-002**. If you find that you're missing this model, you'll need to [deploy the model](../how-to/create-resource.md#deploy-a-model) to your resource before proceeding.
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+import re
+import requests
+import sys
+from num2words import num2words
+import os
+import pandas as pd
+import numpy as np
+import tiktoken
+from openai import AzureOpenAI
+```
+
+---
+
 Now we need to read our csv file and create a pandas DataFrame. After the initial DataFrame is created, we can view the contents of the table by running `df`.
 
 ```python
@@ -334,10 +363,29 @@ len(decode)
 
 Now that we understand more about how tokenization works we can move on to embedding. It is important to note, that we haven't actually tokenized the documents yet. The `n_tokens` column is simply a way of making sure none of the data we pass to the model for tokenization and embedding exceeds the input token limit of 8,192. When we pass the documents to the embeddings model, it will break the documents into tokens similar (though not necessarily identical) to the examples above and then convert the tokens to a series of floating point numbers that will be accessible via vector search. These embeddings can be stored locally or in an [Azure Database to support Vector Search](../../../cosmos-db/mongodb/vcore/vector-search.md). As a result, each bill will have its own corresponding embedding vector in the new `ada_v2` column on the right side of the DataFrame.
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 df_bills['ada_v2'] = df_bills["text"].apply(lambda x : get_embedding(x, engine = 'text-embedding-ada-002')) # engine should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+client = AzureOpenAI(
+    api_key = os.getenv("AZURE_OPENAI_API_KEY"),
+    api_version = "2023-05-15",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+def generate_embeddings(text, model="text-embedding-ada-002"): # model = "deployment_name"
+    return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+df_bills['ada_v2'] = df_bills["text"].apply(lambda x : generate_embeddings (x, model = 'text-embedding-ada-002')) # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+```
+
+---
+
 ```python
 df_bills
 ```
@@ -348,6 +396,8 @@ df_bills
 
 As we run the search code block below, we'll embed the search query *"Can I get information on cable company tax revenue?"* with the same **text-embedding-ada-002 (Version 2)** model. Next we'll find the closest bill embedding to the newly embedded text from our query ranked by [cosine similarity](../concepts/understand-embeddings.md).
 
+# [OpenAI Python 0.28.1](#tab/python)
+
 ```python
 # search through the reviews for a specific product
 def search_docs(df, user_query, top_n=3, to_print=True):
@@ -369,6 +419,36 @@ def search_docs(df, user_query, top_n=3, to_print=True):
 res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
 ```
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+def cosine_similarity(a, b):
+    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
+
+def get_embedding(text, model="text-embedding-ada-002"): # model = "deployment_name"
+    return client.embeddings.create(input = [text], model=model).data[0].embedding
+
+def search_docs(df, user_query, top_n=4, to_print=True):
+    embedding = get_embedding(
+        user_query,
+        model="text-embedding-ada-002" # model should be set to the deployment name you chose when you deployed the text-embedding-ada-002 (Version 2) model
+    )
+    df["similarities"] = df.ada_v2.apply(lambda x: cosine_similarity(x, embedding))
+
+    res = (
+        df.sort_values("similarities", ascending=False)
+        .head(top_n)
+    )
+    if to_print:
+        display(res)
+    return res
+
+
+res = search_docs(df_bills, "Can I get information on cable company tax revenue?", top_n=4)
+```
+
+---
+
 **Output**:
 
 :::image type="content" source="../media/tutorials/query-result.png" alt-text="Screenshot of the formatted results of res once the search query has been run." lightbox="../media/tutorials/query-result.png":::
````
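The new 1.x `search_docs` ranks bills by cosine similarity between embedding vectors. That ranking step can be sketched with toy vectors (the document names and 3-dimensional vectors below are illustrative stand-ins for the 1,536-dimensional text-embedding-ada-002 output):

```python
import numpy as np

def cosine_similarity(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Toy 3-dimensional "embeddings" standing in for real ada-002 vectors;
# the ranking logic is identical at any dimensionality.
docs = {
    "cable tax bill": np.array([0.9, 0.1, 0.0]),
    "health care bill": np.array([0.0, 0.2, 0.9]),
    "broadband levy bill": np.array([0.8, 0.3, 0.1]),
}
query = np.array([1.0, 0.0, 0.0])

# Sort document names by similarity to the query, highest first.
ranked = sorted(docs, key=lambda name: cosine_similarity(docs[name], query), reverse=True)
print(ranked[0])  # → cable tax bill
```

The same idea underlies the tutorial's `df.sort_values("similarities", ascending=False).head(top_n)` call, just expressed over a pandas DataFrame.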

articles/ai-services/security-controls-policy.md

Lines changed: 1 addition & 1 deletion

```diff
@@ -1,7 +1,7 @@
 ---
 title: Azure Policy Regulatory Compliance controls for Azure AI services
 description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources.
-ms.date: 10/23/2023
+ms.date: 11/06/2023
 ms.topic: sample
 author: PatrickFarley
 ms.author: pafarley
```