articles/azure-arc/system-center-virtual-machine-manager/overview.md
2 additions, 2 deletions
@@ -19,7 +19,7 @@ Azure Arc-enabled System Center Virtual Machine Manager allows you to manage you
Arc-enabled System Center VMM allows you to:
-- Perform various VM lifecycle operations such as start, stop, pause, delete VMs on VMM managed VMs directly from Azure.
+- Perform various VM lifecycle operations such as start, stop, pause, and delete VMs on VMM managed VMs directly from Azure.
- Empower developers and application teams to self-serve VM operations on-demand using [Azure role-based access control (RBAC)](../../role-based-access-control/overview.md).
- Browse your VMM resources (VMs, templates, VM networks, and storage) in Azure, providing you a single pane view for your infrastructure across both environments.
- Discover and onboard existing SCVMM managed VMs to Azure.
@@ -72,4 +72,4 @@ For a complete list of network requirements for Azure Arc features and Azure Arc
## Next steps
-[See how to create a Azure Arc VM](create-virtual-machine.md)
+[See how to create a Azure Arc VM](create-virtual-machine.md)
articles/cognitive-services/Speech-Service/batch-transcription.md
3 additions, 0 deletions
@@ -35,6 +35,9 @@ To get started with batch transcription, refer to the following how-to guides:
Batch transcription jobs are scheduled on a best-effort basis. You can't estimate when a job will change into the running state, but it should happen within minutes under normal system load. When the job is in the running state, the transcription occurs faster than the audio runtime playback speed.
+>[!NOTE]
+> You can also use Batch Transcription in Power Platform applications (Power Automate, Power Apps, Logic Apps) via the [Batch Speech-to-text Connector](https://learn.microsoft.com/connectors/cognitiveservicesspe/) with your own Speech resource. Learn more about [Power Platform](https://learn.microsoft.com/power-platform/) and the [connectors](https://learn.microsoft.com/connectors/).
+>
## Next steps
-[Locate audio files for batch transcription](batch-transcription-audio-data.md)
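Submitting a job like the one the paragraph above describes can be sketched as a request payload for the Speech to text REST API. This is a minimal, illustrative sketch, not the documented how-to: the region, display name, and audio URL are placeholder assumptions, and the actual call would also carry an `Ocp-Apim-Subscription-Key` header with your Speech resource key.

```python
import json

# Hypothetical values for illustration only.
region = "eastus"
endpoint = f"https://{region}.api.cognitive.microsoft.com/speechtotext/v3.1/transcriptions"

# Request body for creating a batch transcription job. The content URL
# would normally be a SAS URL pointing at your audio in Azure Blob Storage.
body = {
    "displayName": "My transcription",
    "locale": "en-US",
    "contentUrls": ["https://example.com/audio.wav"],
    "properties": {"wordLevelTimestampsEnabled": True},
}

payload = json.dumps(body)
print(endpoint)
print(payload)
```

Because jobs are scheduled on a best-effort basis, the POST returns immediately and you poll the returned transcription URL until its status reaches `Succeeded` or `Failed`.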
articles/cognitive-services/Speech-Service/voice-assistants.md
1 addition, 1 deletion
@@ -46,7 +46,7 @@ Whether you choose [Direct Line Speech](direct-line-speech.md) or [Custom Comman
|----------|----------|
|[Custom keyword](./custom-keyword-basics.md) | Users can start conversations with assistants by using a custom keyword such as "Hey Contoso." An app does this with a custom keyword engine in the Speech SDK, which you can configure by going to [Get started with custom keywords](./custom-keyword-basics.md). Voice assistants can use service-side keyword verification to improve the accuracy of the keyword activation (versus using the device alone).
|[Speech-to-text](speech-to-text.md) | Voice assistants convert real-time audio into recognized text by using [speech-to-text](speech-to-text.md) from the Speech service. This text is available, as it's transcribed, to both your assistant implementation and your client application.
-|[Text-to-speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to Speech (Neural TTS) voice that gives a voice to your brand. To learn more, [contact us](mailto:[email protected]).
+|[Text-to-speech](text-to-speech.md) | Textual responses from your assistant are synthesized through [text-to-speech](text-to-speech.md) from the Speech service. This synthesis is then made available to your client application as an audio stream. Microsoft offers the ability to build your own custom, high-quality Neural Text to Speech (Neural TTS) voice that gives a voice to your brand.
@@ -19,7 +19,7 @@ Azure OpenAI provides access to many different models, grouped by family and cap
| Model family | Description |
|--|--|
-|[GPT-3](#gpt-3-models)| A series of models that can understand and generate natural language. |
+|[GPT-3](#gpt-3-models)| A series of models that can understand and generate natural language. This includes the new [ChatGPT model](#chatgpt-gpt-35-turbo). |
|[Codex](#codex-models)| A series of models that can understand and generate code, including translating natural language to code. |
|[Embeddings](#embeddings-models)| A set of models that can understand and use embeddings. An embedding is a special format of data representation that can be easily utilized by machine learning models and algorithms. The embedding is an information dense representation of the semantic meaning of a piece of text. Currently, we offer three families of Embeddings models for different functionalities: similarity, text search, and code search. |
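The similarity family mentioned in the Embeddings row is typically used by comparing two embedding vectors with cosine similarity. A minimal sketch, using made-up 4-dimensional toy vectors (real embeddings have hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity: dot product of the vectors divided by the
    # product of their magnitudes; close to 1.0 for similar texts.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for two embedding results.
v1 = [0.1, 0.3, 0.5, 0.7]
v2 = [0.2, 0.4, 0.6, 0.8]
print(cosine_similarity(v1, v2))
```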
@@ -92,6 +92,12 @@ Ada is usually the fastest model and can perform tasks like parsing text, addres
The ChatGPT model (gpt-35-turbo) is a language model designed for conversational interfaces and the model behaves differently than previous GPT-3 models. Previous models were text-in and text-out, meaning they accepted a prompt string and returned a completion to append to the prompt. However, the ChatGPT model is conversation-in and message-out. The model expects a prompt string formatted in a specific chat-like transcript format, and returns a completion that represents a model-written message in the chat.
The ChatGPT model uses the same completion API that you use for other models like text-davinci-002, but it requires a unique prompt format. It's important to use the new prompt format to get the best results. Without the right prompts, the model tends to be verbose and provides less useful responses. To learn more check out our [in-depth how-to](../how-to/chatgpt.md).
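The chat-like transcript format described above can be sketched as a prompt-building helper. This is an illustrative assumption of the ChatML-style layout (messages delimited by `<|im_start|>` and `<|im_end|>` tokens); the function name and the example messages are hypothetical, and the in-depth how-to remains the authoritative reference.

```python
def build_chat_prompt(system_message, user_message):
    # Each message is wrapped in <|im_start|>{role} ... <|im_end|> markers;
    # the prompt ends with an open assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system_message}\n<|im_end|>\n"
        f"<|im_start|>user\n{user_message}\n<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chat_prompt(
    "You are a helpful assistant.",
    "What is batch transcription?",
)
# Pass `prompt` to the completion call; a stop sequence of "<|im_end|>"
# keeps the model from continuing past its own message.
print(prompt)
```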
## Codex models
The Codex models are descendants of our base GPT-3 models that can understand and generate code. Their training data contains both natural language and billions of lines of public code from GitHub.
@@ -168,6 +174,7 @@ When using our embeddings models, keep in mind their limitations and risks.
| text-davinci-002 | Yes | No | East US, South Central US, West Europe | N/A |
| text-davinci-003 | Yes | No | East US | N/A |
| text-davinci-fine-tune-002<sup>1</sup> | Yes | No | N/A | East US, West Europe |
+| gpt-35-turbo (ChatGPT) | Yes | No | N/A | East US, South Central US |
<sup>1</sup> The model is available by request only. Currently we aren't accepting new requests to use the model.
<br><sup>2</sup> East US is currently unavailable for new customers to fine-tune due to high demand. Please use US South Central region for US based training.