Commit d9c03db

Merge branch 'main' of https://github.com/MicrosoftDocs/azure-docs-pr into aca/config-server
2 parents: cb328a8 + b52fff2

588 files changed (+6589 additions, -6364 deletions)


articles/ai-services/openai/concepts/content-filter.md

Lines changed: 23 additions & 23 deletions
@@ -316,6 +316,29 @@ When displaying code in your application, we strongly recommend that the applica
 
 Annotations are currently available in the GA API version `2024-02-01` and in all preview versions starting from `2023-06-01-preview` for Completions and Chat Completions (GPT models). The following code snippet shows how to use annotations:
 
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+# os.getenv() for the endpoint and key assumes that you are using environment variables.
+
+import os
+from openai import AzureOpenAI
+client = AzureOpenAI(
+    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
+    api_version="2023-10-01-preview",
+    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
+)
+
+response = client.completions.create(
+    model="gpt-35-turbo-instruct", # model = "deployment_name".
+    prompt="{Example prompt where a severity level of low is detected}"
+    # Content that is detected at severity level medium or high is filtered,
+    # while content detected at severity level low isn't filtered by the content filters.
+)
+
+print(response.model_dump_json(indent=2))
+```
+
 # [OpenAI Python 0.28.1](#tab/python)
 
 ```python

@@ -456,29 +479,6 @@ except openai.error.InvalidRequestError as e:
 
 ```
 
-# [OpenAI Python 1.x](#tab/python-new)
-
-```python
-# os.getenv() for the endpoint and key assumes that you are using environment variables.
-
-import os
-from openai import AzureOpenAI
-client = AzureOpenAI(
-    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
-    api_version="2023-10-01-preview",
-    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
-)
-
-response = client.completions.create(
-    model="gpt-35-turbo-instruct", # model = "deployment_name".
-    prompt="{Example prompt where a severity level of low is detected}"
-    # Content that is detected at severity level medium or high is filtered,
-    # while content detected at severity level low isn't filtered by the content filters.
-)
-
-print(response.model_dump_json(indent=2))
-```
-
 # [JavaScript](#tab/javascrit)
 
 [Azure OpenAI JavaScript SDK source code & samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/openai/openai)
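The snippet moved above only prints the raw response; as a companion, here is a minimal sketch of reading its annotations. It assumes `response` is the object returned by the `client.completions.create(...)` call in that snippet and that each choice exposes a `content_filter_results` dict with per-category `severity` and `filtered` fields, as the content filtering article describes; the field names can vary by API version.

```python
# Sketch: surface the content filter annotations carried on the response above.
# Assumes `response` exists (from client.completions.create) and that choices carry
# a `content_filter_results` dict; adjust the keys if your API version differs.
import json

payload = json.loads(response.model_dump_json())

for choice in payload.get("choices", []):
    annotations = choice.get("content_filter_results", {})
    for category, details in annotations.items():
        print(f"{category}: severity={details.get('severity')}, filtered={details.get('filtered')}")
```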

articles/ai-services/openai/faq.yml

Lines changed: 1 addition & 1 deletion
@@ -173,7 +173,7 @@ sections:
 - question: |
     How do I deploy a model with the REST API?
   answer:
-    There are currently two different REST APIs that allow model deployment. For the latest model deployment features such as the ability to specify a model version during deployment for models like text-embedding-ada-002 Version 2, use the [Deployments - Create Or Update](/rest/api/cognitiveservices/accountmanagement/deployments/create-or-update?tabs=HTTP) REST API call.
+    There are currently two different REST APIs that allow model deployment. For the latest model deployment features such as the ability to specify a model version during deployment for models like text-embedding-ada-002 Version 2, use the [Deployments - Create Or Update](/rest/api/aiservices/accountmanagement/deployments/create-or-update?tabs=HTTP) REST API call.
 - question: |
     Can I use quota to increase the max token limit of a model?
   answer:
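To make the REST call referenced in that answer concrete, here is a hedged sketch of Deployments - Create Or Update using `requests` and `azure-identity`. The subscription, resource group, account, and deployment names are placeholders, and the `api-version`, SKU, and body layout are assumptions to verify against the linked REST reference rather than values taken from this commit.

```python
# Hedged sketch of the "Deployments - Create Or Update" control-plane call.
# Everything in angle brackets is a placeholder; api-version, SKU, and body shape
# are assumptions to check against the REST reference.
import requests
from azure.identity import DefaultAzureCredential

token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token

url = (
    "https://management.azure.com/subscriptions/<subscription-id>"
    "/resourceGroups/<resource-group>/providers/Microsoft.CognitiveServices"
    "/accounts/<azure-openai-resource>/deployments/<deployment-name>"
)

body = {
    "sku": {"name": "Standard", "capacity": 1},
    "properties": {
        "model": {"format": "OpenAI", "name": "text-embedding-ada-002", "version": "2"}
    },
}

response = requests.put(
    url,
    params={"api-version": "2023-05-01"},  # illustrative; use the version from the REST docs
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
print(response.status_code, response.json())
```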

articles/ai-services/openai/how-to/embeddings.md

Lines changed: 18 additions & 18 deletions
@@ -29,24 +29,6 @@ curl https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYM
   -d '{"input": "Sample Document goes here"}'
 ```
 
-# [OpenAI Python 0.28.1](#tab/python)
-
-```python
-import openai
-
-openai.api_type = "azure"
-openai.api_key = YOUR_API_KEY
-openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2023-05-15"
-
-response = openai.Embedding.create(
-    input="Your text string goes here",
-    engine="YOUR_DEPLOYMENT_NAME"
-)
-embeddings = response['data'][0]['embedding']
-print(embeddings)
-```
-
 # [OpenAI Python 1.x](#tab/python-new)
 
 ```python

@@ -67,6 +49,24 @@ response = client.embeddings.create(
 print(response.model_dump_json(indent=2))
 ```
 
+# [OpenAI Python 0.28.1](#tab/python)
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_key = YOUR_API_KEY
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+
+response = openai.Embedding.create(
+    input="Your text string goes here",
+    engine="YOUR_DEPLOYMENT_NAME"
+)
+embeddings = response['data'][0]['embedding']
+print(embeddings)
+```
+
 # [C#](#tab/csharp)
 ```csharp
 using Azure;
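As a follow-on to the 1.x tab shown above, a small sketch of embedding two strings and comparing them with cosine similarity. It assumes the same environment variables as the snippets above, treats `YOUR_DEPLOYMENT_NAME` as a placeholder for an embeddings deployment, uses an illustrative API version, and adds `numpy` as an extra dependency.

```python
# Sketch: embed two strings with the 1.x client and compare them with cosine similarity.
import os

import numpy as np
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",  # illustrative API version
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

def embed(text: str) -> np.ndarray:
    # "YOUR_DEPLOYMENT_NAME" stands in for an embeddings model deployment.
    response = client.embeddings.create(input=text, model="YOUR_DEPLOYMENT_NAME")
    return np.array(response.data[0].embedding)

a = embed("Sample Document goes here")
b = embed("Your text string goes here")
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
print(f"cosine similarity: {cosine:.4f}")
```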

articles/ai-services/openai/how-to/migration.md

Lines changed: 105 additions & 103 deletions
@@ -41,16 +41,16 @@ As this is a new version of the library with breaking changes, you should test y
 
 To make the migration process easier, we're updating existing code examples in our docs for Python to a tabbed experience:
 
-# [OpenAI Python 0.28.1](#tab/python)
+# [OpenAI Python 1.x](#tab/python-new)
 
 ```console
-pip install openai==0.28.1
+pip install openai --upgrade
 ```
 
-# [OpenAI Python 1.x](#tab/python-new)
+# [OpenAI Python 0.28.1](#tab/python)
 
 ```console
-pip install openai --upgrade
+pip install openai==0.28.1
 ```
 
 ---
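Because the two tabs above install different major versions, here is a small illustrative sketch (not from the doc) of checking which `openai` version is installed before following one tab or the other:

```python
# Illustrative check of the installed openai major version.
from importlib.metadata import version

openai_version = version("openai")
major = int(openai_version.split(".")[0])

if major >= 1:
    print(f"openai {openai_version}: follow the 'OpenAI Python 1.x' examples")
else:
    print(f"openai {openai_version}: follow the 'OpenAI Python 0.28.1' examples")
```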
@@ -59,32 +59,6 @@ This provides context for what has changed and allows you to test the new librar
 
 ## Chat completions
 
-# [OpenAI Python 0.28.1](#tab/python)
-
-You need to set the `engine` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.
-
-```python
-import os
-import openai
-openai.api_type = "azure"
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
-openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
-openai.api_version = "2023-05-15"
-
-response = openai.ChatCompletion.create(
-    engine="gpt-35-turbo", # engine = "deployment_name".
-    messages=[
-        {"role": "system", "content": "You are a helpful assistant."},
-        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
-        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
-        {"role": "user", "content": "Do other Azure AI services support this too?"}
-    ]
-)
-
-print(response)
-print(response['choices'][0]['message']['content'])
-```
-
 # [OpenAI Python 1.x](#tab/python-new)
 
 You need to set the `model` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name results in an error unless you chose a deployment name that is identical to the underlying model name.
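As a companion to that paragraph, a minimal self-contained sketch of the 1.x pattern it describes, with the deployment name passed through `model`. The environment variables and the `gpt-35-turbo` deployment name mirror other snippets in this diff, and the API version shown is illustrative.

```python
# Sketch: in openai 1.x the deployment name goes in the `model` parameter.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",  # illustrative API version
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # your *deployment* name, which may differ from the model name
    messages=[{"role": "user", "content": "Does Azure OpenAI support customer managed keys?"}],
)
print(response.choices[0].message.content)
```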
@@ -114,31 +88,36 @@ print(response.choices[0].message.content)
 
 Additional examples can be found in our [in-depth Chat Completion article](chatgpt.md).
 
----
-
-## Completions
-
 # [OpenAI Python 0.28.1](#tab/python)
 
+You need to set the `engine` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.
+
 ```python
 import os
 import openai
-
+openai.api_type = "azure"
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
 openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
-openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
-openai.api_type = 'azure'
-openai.api_version = '2023-05-15' # this might change in the future
+openai.api_version = "2023-05-15"
 
-deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model.
+response = openai.ChatCompletion.create(
+    engine="gpt-35-turbo", # engine = "deployment_name".
+    messages=[
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
+        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
+        {"role": "user", "content": "Do other Azure AI services support this too?"}
+    ]
+)
 
-# Send a completion call to generate an answer
-print('Sending a test completion job')
-start_phrase = 'Write a tagline for an ice cream shop. '
-response = openai.Completion.create(engine=deployment_name, prompt=start_phrase, max_tokens=10)
-text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
-print(start_phrase+text)
+print(response)
+print(response['choices'][0]['message']['content'])
 ```
 
+---
+
+## Completions
+
 # [OpenAI Python 1.x](#tab/python-new)
 
 ```python

@@ -160,28 +139,31 @@ response = client.completions.create(model=deployment_name, prompt=start_phrase,
 print(response.choices[0].text)
 ```
 
----
-
-## Embeddings
-
 # [OpenAI Python 0.28.1](#tab/python)
 
 ```python
+import os
 import openai
 
-openai.api_type = "azure"
-openai.api_key = YOUR_API_KEY
-openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
-openai.api_version = "2023-05-15"
+openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
+openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT") # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
+openai.api_type = 'azure'
+openai.api_version = '2023-05-15' # this might change in the future
 
-response = openai.Embedding.create(
-    input="Your text string goes here",
-    engine="YOUR_DEPLOYMENT_NAME"
-)
-embeddings = response['data'][0]['embedding']
-print(embeddings)
+deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model.
+
+# Send a completion call to generate an answer
+print('Sending a test completion job')
+start_phrase = 'Write a tagline for an ice cream shop. '
+response = openai.Completion.create(engine=deployment_name, prompt=start_phrase, max_tokens=10)
+text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
+print(start_phrase+text)
 ```
 
+---
+
+## Embeddings
+
 # [OpenAI Python 1.x](#tab/python-new)
 
 ```python

@@ -204,6 +186,24 @@ print(response.model_dump_json(indent=2))
 
 Additional examples including how to handle semantic text search without `embeddings_utils.py` can be found in our [embeddings tutorial](../tutorials/embeddings.md).
 
+# [OpenAI Python 0.28.1](#tab/python)
+
+```python
+import openai
+
+openai.api_type = "azure"
+openai.api_key = YOUR_API_KEY
+openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
+openai.api_version = "2023-05-15"
+
+response = openai.Embedding.create(
+    input="Your text string goes here",
+    engine="YOUR_DEPLOYMENT_NAME"
+)
+embeddings = response['data'][0]['embedding']
+print(embeddings)
+```
+
 ---
 
 ## Async

@@ -260,6 +260,52 @@ print(completion.model_dump_json(indent=2))
 ## Use your data
 
 For the full configuration steps that are required to make these code examples work, consult the [use your data quickstart](../use-your-data-quickstart.md).
+
+# [OpenAI Python 1.x](#tab/python-new)
+
+```python
+import os
+import openai
+import dotenv
+
+dotenv.load_dotenv()
+
+endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
+api_key = os.environ.get("AZURE_OPENAI_API_KEY")
+deployment = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")
+
+client = openai.AzureOpenAI(
+    base_url=f"{endpoint}/openai/deployments/{deployment}/extensions",
+    api_key=api_key,
+    api_version="2023-08-01-preview",
+)
+
+completion = client.chat.completions.create(
+    model=deployment,
+    messages=[
+        {
+            "role": "user",
+            "content": "How is Azure machine learning different than Azure OpenAI?",
+        },
+    ],
+    extra_body={
+        "dataSources": [
+            {
+                "type": "AzureCognitiveSearch",
+                "parameters": {
+                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
+                    "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
+                    "indexName": os.environ["AZURE_AI_SEARCH_INDEX"]
+                }
+            }
+        ]
+    }
+)
+
+print(completion.model_dump_json(indent=2))
+```
+
+
 # [OpenAI Python 0.28.1](#tab/python)
 
 ```python

@@ -319,50 +365,6 @@ completion = openai.ChatCompletion.create(
 print(completion)
 ```
 
-# [OpenAI Python 1.x](#tab/python-new)
-
-```python
-import os
-import openai
-import dotenv
-
-dotenv.load_dotenv()
-
-endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
-api_key = os.environ.get("AZURE_OPENAI_API_KEY")
-deployment = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")
-
-client = openai.AzureOpenAI(
-    base_url=f"{endpoint}/openai/deployments/{deployment}/extensions",
-    api_key=api_key,
-    api_version="2023-08-01-preview",
-)
-
-completion = client.chat.completions.create(
-    model=deployment,
-    messages=[
-        {
-            "role": "user",
-            "content": "How is Azure machine learning different than Azure OpenAI?",
-        },
-    ],
-    extra_body={
-        "dataSources": [
-            {
-                "type": "AzureCognitiveSearch",
-                "parameters": {
-                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
-                    "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
-                    "indexName": os.environ["AZURE_AI_SEARCH_INDEX"]
-                }
-            }
-        ]
-    }
-)
-
-print(completion.model_dump_json(indent=2))
-```
-
 ---
 
 ## DALL-E fix
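The migration doc's `## Async` heading appears as context in the hunks above; as a companion, here is a minimal async sketch against the 1.x client. It assumes `openai>=1.0`, the same environment variables as the earlier snippets, a hypothetical `gpt-35-turbo` deployment name, and an illustrative API version.

```python
# Minimal async sketch with the openai 1.x AsyncAzureOpenAI client.
import asyncio
import os

from openai import AsyncAzureOpenAI


async def main() -> None:
    client = AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2023-05-15",  # illustrative API version
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    )
    response = await client.chat.completions.create(
        model="gpt-35-turbo",  # deployment name, not the underlying model name
        messages=[{"role": "user", "content": "Say hello."}],
    )
    print(response.choices[0].message.content)


asyncio.run(main())
```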
