
Commit d51da64

Merge branch 'release-postgres-flexible' of https://github.com/MicrosoftDocs/azure-docs-pr into postgres-terms2

2 parents: b49e404 + d33e05b

358 files changed (+4684, -2755 lines)


.openpublishing.redirection.json

Lines changed: 5 additions & 0 deletions
@@ -23678,6 +23678,11 @@
       "source_path_from_root": "/articles/aks/ai-toolchain-operator.md",
       "redirect_url": "https://azure.microsoft.com/updates/preview-ai-toolchain-operator-addon-for-aks/",
       "redirect_document_id": false
+    },
+    {
+      "source_path_from_root": "/articles/reliability/disaster-recovery-guidance-overview.md",
+      "redirect_url": "/azure/reliability/reliability-guidance-overview",
+      "redirect_document_id": false
     }
 
 ]

articles/ai-services/language-service/summarization/includes/quickstarts/rest-api.md

Lines changed: 7 additions & 5 deletions
Original file line numberDiff line numberDiff line change
@@ -4,13 +4,17 @@ author: jboback
44
manager: nitinme
55
ms.service: azure-ai-language
66
ms.topic: include
7-
ms.date: 02/17/2023
7+
ms.date: 12/13/2023
88
ms.author: aahi
99
ms.custom: ignite-fall-2021, event-tier1-build-2022, ignite-2022
1010
---
1111

12+
# [Document summarization](#tab/document-summarization)
13+
1214
# [Conversation summarization](#tab/conversation-summarization)
1315

16+
---
17+
1418
Use this quickstart to send text summarization requests using the REST API. In the following example, you will use cURL to summarize documents or text-based customer service conversations.
1519

1620
[!INCLUDE [Use Language Studio](../use-language-studio.md)]
@@ -111,8 +115,6 @@ curl -X GET $LANGUAGE_ENDPOINT/language/analyze-text/jobs/<my-job-id>?api-versio
111115
-H "Ocp-Apim-Subscription-Key: $LANGUAGE_KEY"
112116
```
113117

114-
115-
116118
### Document extractive summarization example JSON response
117119

118120
```json
@@ -189,6 +191,8 @@ curl -X GET $LANGUAGE_ENDPOINT/language/analyze-text/jobs/<my-job-id>?api-versio
189191
}
190192
```
191193

194+
# [Conversation summarization](#tab/conversation-summarization)
195+
192196
## Conversation issue and resolution summarization
193197

194198
The following example will get you started with conversation issue and resolution summarization:
@@ -296,8 +300,6 @@ curl -X GET $LANGUAGE_ENDPOINT/language/analyze-conversations/jobs/<my-job-id>?a
296300
-H "Ocp-Apim-Subscription-Key: $LANGUAGE_KEY"
297301
```
298302

299-
300-
301303
### Conversation issue and resolution summarization example JSON response
302304

303305
```json

articles/ai-services/openai/concepts/models.md

Lines changed: 1 addition & 1 deletion
@@ -115,7 +115,7 @@ See [model versions](../concepts/model-versions.md) to learn about how Azure Ope
 
 ### GPT-3.5 models
 
-GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo (0301) can also be used with the Completions API. GPT3.5 Turbo (0613) only supports the Chat Completions API.
+GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo version 0301 can also be used with the Completions API. GPT-3.5 Turbo versions 0613 and 1106 only support the Chat Completions API.
 
 GPT-3.5 Turbo version 0301 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.
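The API-support statement in the revised line above can be illustrated by the two request-body shapes involved. This is a hedged sketch with placeholder field values, not code from the docs in this commit:

```python
# Sketch of the two request body shapes referenced above: Chat Completions
# takes a list of messages, while the legacy Completions API takes a raw
# prompt string. Values here are illustrative placeholders.

chat_completions_body = {
    # Accepted by GPT-3.5 Turbo versions 0301, 0613, and 1106.
    "messages": [{"role": "user", "content": "Summarize this sentence."}],
    "max_tokens": 50,
}

completions_body = {
    # Legacy shape; of the GPT-3.5 Turbo versions, only 0301 accepts it.
    "prompt": "Summarize this sentence.",
    "max_tokens": 50,
}

print("messages" in chat_completions_body, "prompt" in completions_body)
```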

articles/ai-services/openai/how-to/gpt-with-vision.md

Lines changed: 34 additions & 4 deletions
@@ -143,6 +143,14 @@ Every response includes a `"finish_details"` field. The subfield `"type"` has th
 
 If `finish_details.type` is `stop`, then there is another `"stop"` property that specifies the token that caused the output to end.
 
+## Detail parameter settings in image processing: Low, High, Auto
+
+The detail parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is `auto`, where the model decides between `low` or `high` based on the size of the image input.
+- `low` setting: the model doesn't activate "high res" mode, and instead processes a lower-resolution 512x512 version of the image, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
+- `high` setting: the model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
+
+For details on how the image parameters impact tokens used and pricing, see [What is OpenAI? Image Tokens with GPT-4 Turbo with Vision](../overview.md#image-tokens-gpt-4-turbo-with-vision).
+
 ## Use Vision enhancement with images
 
 GPT-4 Turbo with Vision provides exclusive access to Azure AI Services tailored enhancements. When combined with Azure AI Vision, it enhances your chat experience by providing the chat model with more detailed information about visible text in the image and the locations of objects.
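The `detail` setting described in the added section is passed per image in the chat request body. Here is a minimal sketch; the helper name and the example URL are mine, not from this commit:

```python
# Sketch: setting the image "detail" level ("low", "high", or "auto") on a
# GPT-4 Turbo with Vision chat message. The payload only is built here; no
# request is sent. build_vision_message and the URL are illustrative.

def build_vision_message(prompt: str, image_url: str, detail: str = "auto") -> dict:
    """Build a user message containing text plus one image, with the
    detail parameter controlling how the model processes the image."""
    if detail not in ("low", "high", "auto"):
        raise ValueError("detail must be 'low', 'high', or 'auto'")
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": prompt},
            # "low" skips high-res mode (single 512x512 pass);
            # "high" also tiles the image into detailed 512x512 segments.
            {"type": "image_url", "image_url": {"url": image_url, "detail": detail}},
        ],
    }

message = build_vision_message("Describe this chart.", "https://example.com/chart.png", detail="low")
print(message["content"][1]["image_url"]["detail"])  # prints low
```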
@@ -396,11 +404,33 @@ Every response includes a `"finish_details"` field. The subfield `"type"` has th
 
 If `finish_details.type` is `stop`, then there is another `"stop"` property that specifies the token that caused the output to end.
 
-## Detail parameter settings in image processing: Low, High, Auto
+### Pricing example for video prompts
+The pricing for GPT-4 Turbo with Vision is dynamic and depends on the specific features and inputs used. For a comprehensive view of Azure OpenAI pricing, see [Azure OpenAI pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/).
 
-The detail parameter in the model offers three choices: `low`, `high`, or `auto`, to adjust the way the model interprets and processes images. The default setting is auto, where the model decides between low or high based on the size of the image input.
-- `low` setting: the model does not activate the "high res" mode, instead processes a lower resolution 512x512 version, resulting in quicker responses and reduced token consumption for scenarios where fine detail isn't crucial.
-- `high` setting: the model activates "high res" mode. Here, the model initially views the low-resolution image and then generates detailed 512x512 segments from the input image. Each segment uses double the token budget, allowing for a more detailed interpretation of the image.
+The base charges and additional features are outlined below.
+
+Base pricing for GPT-4 Turbo with Vision:
+- Input: $0.01 per 1,000 tokens
+- Output: $0.03 per 1,000 tokens
+
+Video prompt integration with the Video Retrieval add-on:
+- Ingestion: $0.05 per minute of video
+- Transactions: $0.25 per 1,000 queries of the Video Retrieval index
+
+Processing videos involves extra tokens to identify key frames for analysis. The number of these additional tokens is roughly equivalent to the sum of the tokens in the text input plus 700 tokens.
+
+#### Calculation
+Suppose you use a 3-minute video with a 100-token prompt input. The section of video has a transcript that's 100 tokens long, and processing the prompt generates 100 tokens of output. The pricing for this transaction would be as follows:
+
+| Item | Detail | Total cost |
+|------|--------|------------|
+| GPT-4 Turbo with Vision input tokens | 100 text tokens | $0.001 |
+| Additional cost to identify frames | 100 input tokens + 700 tokens + 1 Video Retrieval transaction | $0.00825 |
+| Image inputs and transcript input | 20 images (85 tokens each) + 100 transcript tokens | $0.018 |
+| Output tokens | 100 tokens (assumed) | $0.003 |
+| **Total cost** | | **$0.03025** |
+
+Additionally, there's a one-time indexing cost of $0.15 to generate the Video Retrieval index for this 3-minute segment of video. This index can be reused across any number of Video Retrieval and GPT-4 Turbo with Vision calls.
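The worked example in the added pricing section can be checked with a few lines of arithmetic. The rates below are the ones quoted in that section; the variable names are mine:

```python
# Sketch: reproducing the worked pricing calculation from the section above.
INPUT_RATE = 0.01 / 1000           # $ per input token
OUTPUT_RATE = 0.03 / 1000          # $ per output token
RETRIEVAL_TXN_RATE = 0.25 / 1000   # $ per Video Retrieval index query

prompt_tokens = 100
frame_id_tokens = prompt_tokens + 700  # extra tokens to identify key frames
image_tokens = 20 * 85                 # 20 key frames at 85 tokens each
transcript_tokens = 100
output_tokens = 100

cost = (
    prompt_tokens * INPUT_RATE                               # $0.001
    + frame_id_tokens * INPUT_RATE + 1 * RETRIEVAL_TXN_RATE  # $0.00825
    + (image_tokens + transcript_tokens) * INPUT_RATE        # $0.018
    + output_tokens * OUTPUT_RATE                            # $0.003
)
print(round(cost, 5))  # prints 0.03025, matching the table total
```

The one-time $0.15 indexing charge ($0.05 per minute for 3 minutes) sits outside this per-transaction total, since the index is reusable.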
 
 ## Limitations

articles/ai-services/speech-service/embedded-speech.md

Lines changed: 2 additions & 2 deletions
@@ -131,7 +131,7 @@ Follow these steps to install the Speech SDK for Java using Apache Maven:
         <dependency>
             <groupId>com.microsoft.cognitiveservices.speech</groupId>
             <artifactId>client-sdk-embedded</artifactId>
-            <version>1.33.0</version>
+            <version>1.34.0</version>
         </dependency>
     </dependencies>
 </project>
@@ -152,7 +152,7 @@ Be sure to use the `@aar` suffix when the dependency is specified in `build.grad
 
 ```
 dependencies {
-    implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.33.0@aar'
+    implementation 'com.microsoft.cognitiveservices.speech:client-sdk-embedded:1.34.0@aar'
 }
 ```
 ::: zone-end

articles/ai-services/speech-service/includes/quickstarts/captioning/java.md

Lines changed: 1 addition & 1 deletion
@@ -43,7 +43,7 @@ Before you can do anything, you need to [install the Speech SDK](~/articles/ai-s
         <dependency>
             <groupId>com.microsoft.cognitiveservices.speech</groupId>
             <artifactId>client-sdk</artifactId>
-            <version>1.33.0</version>
+            <version>1.34.0</version>
         </dependency>
     </dependencies>
 </project>

articles/ai-services/speech-service/includes/quickstarts/platform/java-android.md

Lines changed: 1 addition & 1 deletion
@@ -50,6 +50,6 @@ Add the Speech SDK as a dependency in your project.
 
 :::image type="content" source="../../../media/sdk/android-studio/sdk-install-3-zoom.png" alt-text="Screenshot that shows how to add a library dependency in Android Studio." lightbox="../../../media/sdk/android-studio/sdk-install-3.png":::
 
-1. In the **Add Library Dependency** window that appears, enter the name and version of the Speech SDK for Java: *com.microsoft.cognitiveservices.speech:client-sdk:1.33.0*. Then select **Search**.
+1. In the **Add Library Dependency** window that appears, enter the name and version of the Speech SDK for Java: *com.microsoft.cognitiveservices.speech:client-sdk:1.34.0*. Then select **Search**.
 1. Make sure that the selected **Group ID** is **com.microsoft.cognitiveservices.speech**, and then select **OK**.
 1. Select **OK** to close the **Project Structure** window and apply your changes to the project.

articles/ai-services/speech-service/includes/quickstarts/platform/java-jre.md

Lines changed: 3 additions & 3 deletions
@@ -52,7 +52,7 @@ Follow these steps to install the Speech SDK for Java using Apache Maven:
         <dependency>
             <groupId>com.microsoft.cognitiveservices.speech</groupId>
             <artifactId>client-sdk</artifactId>
-            <version>1.33.0</version>
+            <version>1.34.0</version>
         </dependency>
     </dependencies>
 </project>
@@ -107,7 +107,7 @@ Follow these steps to install the Speech SDK for Java using Apache Maven:
         <dependency>
             <groupId>com.microsoft.cognitiveservices.speech</groupId>
             <artifactId>client-sdk</artifactId>
-            <version>1.33.0</version>
+            <version>1.34.0</version>
         </dependency>
     </dependencies>
 ```
@@ -124,7 +124,7 @@ Gradle configurations require an explicit reference to the *.jar* dependency ext
 // build.gradle
 
 dependencies {
-    implementation group: 'com.microsoft.cognitiveservices.speech', name: 'client-sdk', version: "1.33.0", ext: "jar"
+    implementation group: 'com.microsoft.cognitiveservices.speech', name: 'client-sdk', version: "1.34.0", ext: "jar"
 }
 ```

articles/ai-services/speech-service/includes/quickstarts/platform/objectivec.md

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@ The macOS CocoaPod package is available for download and use with the [Xcode 9.4
 use_frameworks!
 
 target 'AppName' do
-  pod 'MicrosoftCognitiveServicesSpeech-macOS', '~> 1.33.0'
+  pod 'MicrosoftCognitiveServicesSpeech-macOS', '~> 1.34.0'
 end
 ```
 
@@ -65,7 +65,7 @@ The macOS CocoaPod package is available for download and use with the [Xcode 9.4
 use_frameworks!
 
 target 'AppName' do
-  pod 'MicrosoftCognitiveServicesSpeech-iOS', '~> 1.33.0'
+  pod 'MicrosoftCognitiveServicesSpeech-iOS', '~> 1.34.0'
 end
 ```

articles/ai-services/speech-service/includes/quickstarts/platform/swift.md

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@ The macOS CocoaPod package is available for download and use with the [Xcode 9.4
 use_frameworks!
 
 target 'AppName' do
-  pod 'MicrosoftCognitiveServicesSpeech-macOS', '~> 1.33.0'
+  pod 'MicrosoftCognitiveServicesSpeech-macOS', '~> 1.34.0'
 end
 ```
 
@@ -65,7 +65,7 @@ The macOS CocoaPod package is available for download and use with the [Xcode 9.4
 use_frameworks!
 
 target 'AppName' do
-  pod 'MicrosoftCognitiveServicesSpeech-iOS', '~> 1.33.0'
+  pod 'MicrosoftCognitiveServicesSpeech-iOS', '~> 1.34.0'
 end
 ```
