Commit 47fdffa (2 parents: 18023c2 + 907cdcc)

Merge pull request #257725 from MicrosoftDocs/main

11/6/2023 PM Publish

474 files changed: +1855, −2164 lines


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -555,6 +555,11 @@
   "redirect_url": "/previous-versions/azure/cognitive-services/Bing-Web-Search/hit-highlighting",
   "redirect_document_id": false
 },
+{
+  "source_path": "articles/cognitive-services/Bing-Web-Search/index.yml",
+  "redirect_url": "/previous-versions/azure/cognitive-services/Bing-Web-Search/overview",
+  "redirect_document_id": false
+},
 {
   "source_path": "articles/cognitive-services/Bing-Web-Search/language-support.md",
   "redirect_url": "/previous-versions/azure/cognitive-services/Bing-Web-Search/language-support",

articles/ai-services/openai/concepts/use-your-data.md

Lines changed: 1 addition & 1 deletion
@@ -122,7 +122,7 @@ You can modify the following additional settings in the **Data parameters** sect

 |Parameter name | Description |
 |---------|---------|
-|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 3. This is the `topNDocuments` parameter in the API. |
+|**Retrieved documents** | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. The default value is 5. This is the `topNDocuments` parameter in the API. |
 | **Strictness** | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. The default value is 3. |

 ## Virtual network support & private endpoint support
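The changed default of 5 maps to the `topNDocuments` field in the request body when you call the service with your own data. A minimal sketch of where that field sits (endpoint, deployment, and index names are placeholders; the `dataSources` field names follow the parameters table updated elsewhere in this same commit in `reference.md`):

```python
# Hedged sketch: request body for Azure OpenAI "on your data", showing where
# topNDocuments (the "Retrieved documents" studio setting) lives.
# Endpoint and index names below are placeholders, not real resources.
def build_on_your_data_body(search_endpoint: str, index_name: str,
                            top_n_documents: int = 5) -> dict:
    """Assemble the extensions-style body with a single search data source."""
    return {
        "messages": [{"role": "user", "content": "What does my data say?"}],
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": search_endpoint,
                    "indexName": index_name,
                    "topNDocuments": top_n_documents,  # default changed 3 -> 5
                },
            }
        ],
    }

body = build_on_your_data_body("https://example.search.windows.net", "my-index")
```

Leaving `top_n_documents` at its default keeps the payload aligned with the new service-side default.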

articles/ai-services/openai/faq.yml

Lines changed: 7 additions & 3 deletions
@@ -7,7 +7,7 @@ metadata:
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: faq
-ms.date: 10/16/2023
+ms.date: 11/06/2023
 ms.author: mbullwin
 author: mrbullwinkle
 title: Azure OpenAI Service frequently asked questions

@@ -22,6 +22,10 @@ sections:
 Azure OpenAI doesn't use customer data to retrain models. For more information, see the [Azure OpenAI data, privacy, and security guide](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context).
 - name: General
 questions:
+- question: |
+  Does Azure OpenAI work with the latest Python library released by OpenAI (version>=1.0)?
+  answer: |
+  Azure OpenAI is supported by the latest release of the [OpenAI Python library (version>=1.0)](https://pypi.org/project/openai/). However, it is important to note migration of your codebase using `openai migrate` is not supported and will not work with code that targets Azure OpenAI.
 - question: |
   Does Azure OpenAI support GPT-4?
   answer: |

@@ -77,7 +81,7 @@ sections:

 Ultimately, the model is performing next [token](/semantic-kernel/prompt-engineering/tokens) prediction in response to your question. The model doesn't have any native ability to query what model version is currently being run to answer your question. To answer this question, you can always go to **Azure OpenAI Studio** > **Management** > **Deployments** > and consult the model name column to confirm what model is currently associated with a given deployment name.

-The questions, "What model are you running?" or "What is the latest model from OpenAI?" produce similar quality results to asking the model what the weather will be today. It might return the correct result, but purely by chance. On its own, the model has no real-world information other than what was part of its training/training data. In the case of GPT-4, as of August 2023 the underlying training data goes only up to September 2021. GPT-4 was not released until March 2023, so barring OpenAI releasing a new version with updated training data, or a new version that is fine-tuned to answer those specific questions, it's expected behavior for GPT-4 to respond that GPT-3 is the latest model release from OpenAI.
+The questions, "What model are you running?" or "What is the latest model from OpenAI?" produce similar quality results to asking the model what the weather will be today. It might return the correct result, but purely by chance. On its own, the model has no real-world information other than what was part of its training/training data. In the case of GPT-4, as of August 2023 the underlying training data goes only up to September 2021. GPT-4 wasn't released until March 2023, so barring OpenAI releasing a new version with updated training data, or a new version that is fine-tuned to answer those specific questions, it's expected behavior for GPT-4 to respond that GPT-3 is the latest model release from OpenAI.

 If you wanted to help a GPT based model to accurately respond to the question "what model are you running?", you would need to provide that information to the model through techniques like [prompt engineering of the model's system message](/azure/ai-services/openai/concepts/advanced-prompt-engineering?pivots=programming-language-chat-completions), [Retrieval Augmented Generation (RAG)](/azure/machine-learning/concept-retrieval-augmented-generation?view=azureml-api-2) which is the technique used by [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data) where up-to-date information is injected to the system message at query time, or via [fine-tuning](/azure/ai-services/openai/how-to/fine-tuning?pivots=programming-language-studio) where you could fine-tune specific versions of the model to answer that question in a certain way based on model version.

@@ -164,7 +168,7 @@ sections:
 - question: |
   Will my web app be overwritten when I deploy the app again from the Azure AI Studio?
   answer:
-  Your app code will not be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure Cognitive Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure OpenAI Studio without any change to the appearance or functionality.
+  Your app code won't be overwritten when you update your app. The app will be updated to use the Azure OpenAI resource, Azure Cognitive Search index (if you're using Azure OpenAI on your data), and model settings selected in the Azure OpenAI Studio without any change to the appearance or functionality.
 - name: Using your data
 questions:
 - question: |
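The new FAQ entry above is easy to trip over in practice: the 1.x library replaces module-level configuration (`openai.api_base`, `openai.api_key`, `engine=...`) with an explicit client object. A minimal sketch of the new call shape, with the actual client construction shown in comments so the snippet stays self-contained (endpoint, key, and deployment name are placeholders):

```python
# Hedged sketch of the openai>=1.0 call shape for Azure OpenAI.
# In 1.x, configuration moves into an AzureOpenAI client object instead of
# module-level attributes as in 0.28.x.
def azure_client_kwargs(endpoint: str, key: str,
                        api_version: str = "2023-05-15") -> dict:
    """Collect keyword arguments for the 1.x AzureOpenAI client (placeholders)."""
    return {"azure_endpoint": endpoint, "api_key": key, "api_version": api_version}

kwargs = azure_client_kwargs("https://example.openai.azure.com/", "PLACEHOLDER_KEY")

# With the library installed, usage would look like:
#   from openai import AzureOpenAI
#   client = AzureOpenAI(**kwargs)
#   response = client.chat.completions.create(
#       model="my-deployment-name",  # deployment name, not base model name
#       messages=[{"role": "user", "content": "Hello"}],
#   )
print(sorted(kwargs))
```

Note `model` takes the deployment name, matching the FAQ's point that `openai migrate` can't rewrite Azure-targeted code automatically.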

articles/ai-services/openai/how-to/embeddings.md

Lines changed: 1 addition & 1 deletion
@@ -48,7 +48,7 @@ embeddings = response['data'][0]['embedding']
 print(embeddings)
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```python
 import os
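Whichever library version produces the embedding vector, the usual next step is comparing vectors. A stdlib-only sketch of cosine similarity (the short vectors below are made-up stand-ins for real embedding output):

```python
import math

# Stdlib-only cosine similarity between two embedding vectors.
def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 3-dimensional vectors standing in for real embedding responses.
score = cosine_similarity([1.0, 0.0, 1.0], [1.0, 0.0, 1.0])
print(score)  # identical vectors → 1.0
```

Identical vectors score 1.0 and orthogonal vectors score 0.0, which is why cosine similarity is the common choice for ranking embedding matches.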

articles/ai-services/openai/includes/chat-completion.md

Lines changed: 3 additions & 3 deletions
@@ -69,7 +69,7 @@ JSON formatting added artificially for ease of reading.

 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```python
 import os

@@ -330,7 +330,7 @@ while True:
 print("\n" + response['choices'][0]['message']['content'] + "\n")
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```python
 import os

@@ -452,7 +452,7 @@ while True:
 print("\n" + response['choices'][0]['message']['content'] + "\n")
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```python
 import tiktoken
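The last hunk's context shows the include importing `tiktoken` to keep chat history within the model's context window. The trimming loop itself is library-independent; a sketch with a crude word-count stand-in (a real implementation would substitute `tiktoken`-based counts):

```python
# Hedged sketch: drop oldest turns until the conversation fits a token budget.
# len(content.split()) is a crude stand-in for a real tiktoken count.
def count_tokens(message: dict) -> int:
    return len(message["content"].split())

def trim_history(messages: list[dict], max_tokens: int) -> list[dict]:
    """Keep the system message; drop oldest user/assistant turns until under budget."""
    system, turns = messages[:1], messages[1:]
    while turns and sum(map(count_tokens, system + turns)) > max_tokens:
        turns.pop(0)  # drop the oldest turn first
    return system + turns

history = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "first question about many things"},
    {"role": "assistant", "content": "a long detailed answer with many words here"},
    {"role": "user", "content": "follow up"},
]
trimmed = trim_history(history, max_tokens=12)
```

The system message is always retained, which matches the common pattern of pinning instructions while the middle of the conversation rolls off.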

articles/ai-services/openai/includes/chatgpt-python.md

Lines changed: 2 additions & 2 deletions
@@ -36,7 +36,7 @@ Install the OpenAI Python client library with:
 pip install openai==0.28.1
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```console
 pip install openai

@@ -86,7 +86,7 @@ print(response)
 print(response['choices'][0]['message']['content'])
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 You need to set the `model` variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.
articles/ai-services/openai/includes/python.md

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@ Install the OpenAI Python client library with:
 pip install openai==0.28.1
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```console
 pip install openai

@@ -95,7 +95,7 @@ text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip
 print(start_phrase+text)
 ```

-# [OpenAI Python 1.0](#tab/python-new)
+# [OpenAI Python 1.x](#tab/python-new)

 ```python
 import os
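The cleanup visible in the second hunk's context (`.replace('\n', '').replace(' .', '.').strip()`) is easy to exercise in isolation, without an API call:

```python
# The same post-processing the quickstart applies to the completion text,
# factored into a function so it can be tested offline.
def clean_completion(raw: str) -> str:
    return raw.replace('\n', '').replace(' .', '.').strip()

# A made-up raw completion standing in for response['choices'][0]['text'].
print(clean_completion("\n a fast red fox .\n"))  # → a fast red fox.
```

It strips newlines and the space the model sometimes emits before a period, then trims the surrounding whitespace.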

articles/ai-services/openai/reference.md

Lines changed: 2 additions & 2 deletions
@@ -6,7 +6,7 @@ services: cognitive-services
 manager: nitinme
 ms.service: azure-ai-openai
 ms.topic: conceptual
-ms.date: 10/05/2023
+ms.date: 11/06/2023
 author: mrbullwinkle
 ms.author: mbullwin
 recommendations: false

@@ -392,7 +392,7 @@ The following parameters can be used inside of the `parameters` field inside of
 | `indexName` | string | Required | null | The search index to be used. |
 | `fieldsMapping` | dictionary | Optional | null | Index data column mapping. |
 | `inScope` | boolean | Optional | true | If set, this value will limit responses specific to the grounding data content. |
-| `topNDocuments` | number | Optional | 3 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
+| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
 | `queryType` | string | Optional | simple | Indicates which query option will be used for Azure Cognitive Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. |
 | `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. |
 | `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There’s a 100 token limit, which counts towards the overall token limit.|
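The table rows in the second hunk encode a constraint worth checking client-side before sending a request: `semanticConfiguration` is required only when `queryType` is `semantic` or `vectorSemanticHybrid`. A hedged sketch of that validation (the helper name is invented for illustration; the allowed values and the requirement come from the table above):

```python
# Validate a dataSources `parameters` block against the constraints stated in
# the reference table: queryType must be a known value, and
# semanticConfiguration is required for the semantic query types.
QUERY_TYPES = {"simple", "semantic", "vector",
               "vectorSimpleHybrid", "vectorSemanticHybrid"}

def validate_query_params(params: dict) -> list[str]:
    """Return a list of human-readable problems; empty means valid."""
    errors = []
    query_type = params.get("queryType", "simple")  # documented default
    if query_type not in QUERY_TYPES:
        errors.append(f"unknown queryType: {query_type}")
    if (query_type in {"semantic", "vectorSemanticHybrid"}
            and not params.get("semanticConfiguration")):
        errors.append("semanticConfiguration is required for semantic query types")
    return errors

print(validate_query_params({"queryType": "semantic"}))
```

Catching this locally avoids a round trip that the service would reject anyway.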

articles/ai-services/policy-reference.md

Lines changed: 1 addition & 1 deletion
@@ -1,7 +1,7 @@
 ---
 title: Built-in policy definitions for Azure AI services
 description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources.
-ms.date: 11/03/2023
+ms.date: 11/06/2023
 author: nitinme
 ms.author: nitinme
 ms.service: azure-ai-services

articles/ai-services/speech-service/embedded-speech.md

Lines changed: 6 additions & 6 deletions
@@ -132,7 +132,7 @@ Follow these steps to install the Speech SDK for Java using Apache Maven:
 <dependency>
 <groupId>com.microsoft.cognitiveservices.speech</groupId>
 <artifactId>client-sdk-embedded</artifactId>
-<version>1.32.1</version>
+<version>1.33.0</version>
 </dependency>
 </dependencies>
 </project>

@@ -153,19 +153,19 @@ Be sure to use the `@aar` suffix when the dependency is specified in `build.grad

 ```
 dependencies {
-implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.32.1@aar'
+implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.33.0@aar'
 }
 ```
 ::: zone-end

 ## Models and voices

-For embedded speech, you'll need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
+For embedded speech, you need to download the speech recognition models for [speech to text](speech-to-text.md) and voices for [text to speech](text-to-speech.md). Instructions are provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.

 The following [speech to text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.

-All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of box with either 1 selected female and/or 1 selected male voices. We welcome your input to help us gauge demand for additional languages and voices.
+All text to speech locales [here](language-support.md?tabs=tts) (except fa-IR, Persian (Iran)) are available out of box with either 1 selected female and/or 1 selected male voices. We welcome your input to help us gauge demand for more languages and voices.

 ## Embedded speech configuration

@@ -275,15 +275,15 @@ Hybrid speech with the `HybridSpeechConfig` object uses the cloud speech service

 With hybrid speech configuration for [speech to text](speech-to-text.md) (recognition models), embedded speech is used when connection to the cloud service fails after repeated attempts. Recognition may continue using the cloud service again if the connection is later resumed.

-With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the result is selected based on which one gives a faster response. The best result is evaluated on each synthesis request.
+With hybrid speech configuration for [text to speech](text-to-speech.md) (voices), embedded and cloud synthesis are run in parallel and the final result is selected based on response speed. The best result is evaluated again on each new synthesis request.

 ## Cloud speech

 For cloud speech, you use the `SpeechConfig` object, as shown in the [speech to text quickstart](get-started-speech-to-text.md) and [text to speech quickstart](get-started-text-to-speech.md). To run the quickstarts for embedded speech, you can replace `SpeechConfig` with `EmbeddedSpeechConfig` or `HybridSpeechConfig`. Most of the other speech recognition and synthesis code are the same, whether using cloud, embedded, or hybrid configuration.

 ## Embedded voices capabilities

-For embedded voices, it is essential to note that certain SSML tags may not be currently supported due to differences in the model structure. For detailed information regarding the unsupported SSML tags, please refer to the table below.
+For embedded voices, it is essential to note that certain SSML tags may not be currently supported due to differences in the model structure. For detailed information regarding the unsupported SSML tags, refer to the following table.

 | Level 1 | Level 2 | Sub values | Support in embedded NTTS |
 |-----------------|-----------|-------------------------------------------------------|--------------------------|
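The hybrid text to speech behavior described in the third hunk (embedded and cloud synthesis run in parallel; the faster response wins) can be sketched with plain asyncio. The two coroutines below are toy stand-ins for real embedded and cloud SDK calls, with sleeps simulating latency:

```python
import asyncio

# Hedged sketch of the hybrid selection idea: race two synthesis backends
# and take whichever finishes first. These coroutines are stand-ins for
# real embedded/cloud Speech SDK calls, not actual SDK APIs.
async def embedded_synthesis() -> str:
    await asyncio.sleep(0.01)  # on-device synthesis: fast here
    return "embedded"

async def cloud_synthesis() -> str:
    await asyncio.sleep(0.05)  # simulated network round trip: slower here
    return "cloud"

async def hybrid_synthesize() -> str:
    tasks = [asyncio.create_task(c()) for c in (embedded_synthesis, cloud_synthesis)]
    done, pending = await asyncio.wait(tasks, return_when=asyncio.FIRST_COMPLETED)
    for task in pending:
        task.cancel()  # drop the slower backend's result
    return done.pop().result()

winner = asyncio.run(hybrid_synthesize())
print(winner)  # → embedded
```

Because the race is rerun per request, a backend that is faster this time wins this time only, which mirrors the docs' note that the best result is evaluated again on each new synthesis request.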
