Commit c3c1366

Merge pull request #109236 from MicrosoftDocs/repo_sync_working_branch

Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)

2 parents: 5ca9ce9 + 4869817

11 files changed: +40 / -20 lines

articles/aks/gpu-cluster.md

Lines changed: 7 additions & 4 deletions

@@ -51,7 +51,7 @@ Get the credentials for your AKS cluster using the [az aks get-credentials][az-a
 az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
 ```

-## Install nVidia drivers
+## Install NVIDIA drivers

 Before the GPUs in the nodes can be used, you must deploy a DaemonSet for the NVIDIA device plugin. This DaemonSet runs a pod on each node to provide the required drivers for the GPUs.

@@ -64,12 +64,15 @@ kubectl create namespace gpu-resources
 Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest. This manifest is provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github].

 ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: nvidia-device-plugin-daemonset
   namespace: gpu-resources
 spec:
+  selector:
+    matchLabels:
+      name: nvidia-device-plugin-ds
   updateStrategy:
     type: RollingUpdate
   template:
@@ -106,7 +109,7 @@ spec:
             path: /var/lib/kubelet/device-plugins
 ```

-Now use the [kubectl apply][kubectl-apply] command to create the DaemonSet and confirm the nVidia device plugin is created successfully, as shown in the following example output:
+Now use the [kubectl apply][kubectl-apply] command to create the DaemonSet and confirm the NVIDIA device plugin is created successfully, as shown in the following example output:

 ```console
 $ kubectl apply -f nvidia-device-plugin-ds.yaml
@@ -184,7 +187,7 @@ To see the GPU in action, schedule a GPU-enabled workload with the appropriate r
 Create a file named *samples-tf-mnist-demo.yaml* and paste the following YAML manifest. The following job manifest includes a resource limit of `nvidia.com/gpu: 1`:

 > [!NOTE]
-> If you receive a version mismatch error when calling into drivers, such as, CUDA driver version is insufficient for CUDA runtime version, review the nVidia driver matrix compatibility chart - [https://docs.nvidia.com/deploy/cuda-compatibility/index.html](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
+> If you receive a version mismatch error when calling into drivers, such as, CUDA driver version is insufficient for CUDA runtime version, review the NVIDIA driver matrix compatibility chart - [https://docs.nvidia.com/deploy/cuda-compatibility/index.html](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)

 ```yaml
 apiVersion: batch/v1
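Reviewer note on the manifest change: moving from `extensions/v1beta1` to `apps/v1` is what forces the new `selector` block, because `apps/v1` requires `spec.selector.matchLabels` and requires it to match the pod template labels. A minimal Python sketch of that validation rule (the helper function and plain-dict manifest are illustrative, not part of the commit):

```python
# Sketch (hypothetical helper, not from the commit): in apps/v1 a DaemonSet
# must declare spec.selector, and its matchLabels must select the pod
# template's labels -- the reason the diff above adds the selector block.
def selector_matches_template(manifest: dict) -> bool:
    spec = manifest["spec"]
    match_labels = spec.get("selector", {}).get("matchLabels")
    if not match_labels:
        return False  # apps/v1 rejects a DaemonSet without a selector
    template_labels = spec["template"]["metadata"]["labels"]
    return all(template_labels.get(k) == v for k, v in match_labels.items())

daemonset = {
    "apiVersion": "apps/v1",
    "kind": "DaemonSet",
    "metadata": {"name": "nvidia-device-plugin-daemonset",
                 "namespace": "gpu-resources"},
    "spec": {
        "selector": {"matchLabels": {"name": "nvidia-device-plugin-ds"}},
        "template": {"metadata": {"labels": {"name": "nvidia-device-plugin-ds"}}},
    },
}
print(selector_matches_template(daemonset))  # True
```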

articles/aks/windows-node-limitations.md

Lines changed: 2 additions & 1 deletion

@@ -50,7 +50,7 @@ AKS clusters with Windows node pools must use the Azure CNI (advanced) networkin
 ## Can I change the max. # of pods per node?

-It is currently a requirement to be set to a maximum of 30 pods to ensure the reliability of your clusters.
+Yes. For the implications and options that are available, see [Maximum number of pods][maximum-number-of-pods].

 ## How do patch my Windows nodes?

@@ -118,3 +118,4 @@ To get started with Windows Server containers in AKS, [create a node pool that r
 [nodepool-limitations]: use-multiple-node-pools.md#limitations
 [preview-support]: support-policies.md#preview-features-or-feature-flags
 [windows-container-compat]: /virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2019%2Cwindows-10-1909
+[maximum-number-of-pods]: configure-azure-cni.md#maximum-pods-per-node

articles/app-service-mobile/app-service-mobile-windows-store-dotnet-get-started-push.md

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ If you do not use the downloaded quick start server project, you will need the p
 ## Register your app for push notifications

-You need to submit your app to the Microsoft Store, then configure your server project to integrate with [Windows Notification Services (WNS)](https://docs.microsoft.com/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) to send push.
+You need to submit your app to the Microsoft Store, then configure your server project to integrate with [Windows Push Notification Services (WNS)](https://docs.microsoft.com/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) to send push.

 1. In Visual Studio Solution Explorer, right-click the UWP app project, click **Store** > **Associate App with the Store...**.

articles/cognitive-services/Custom-Vision-Service/node-tutorial.md

Lines changed: 4 additions & 4 deletions

@@ -47,8 +47,8 @@ Add the following code to your script to create a new Custom Vision service proj
 ```javascript
 const util = require('util');
 const fs = require('fs');
-const TrainingApiClient = require("@azure/cognitiveservices-customvision-training");
-const PredictionApiClient = require("@azure/cognitiveservices-customvision-prediction");
+const TrainingApi = require("@azure/cognitiveservices-customvision-training");
+const PredictionApi = require("@azure/cognitiveservices-customvision-prediction");

 const setTimeoutPromise = util.promisify(setTimeout);

@@ -61,7 +61,7 @@ const endPoint = "https://<my-resource-name>.cognitiveservices.azure.com/"

 const publishIterationName = "classifyModel";

-const trainer = new TrainingApiClient(trainingKey, endPoint);
+const trainer = new TrainingApi.TrainingAPIClient(trainingKey, endPoint);

 (async () => {
     console.log("Creating project...");
@@ -129,7 +129,7 @@ await trainer.publishIteration(sampleProject.id, trainingIteration.id, publishIt
 To send an image to the prediction endpoint and retrieve the prediction, add the following code to the end of the file:

 ```javascript
-const predictor = new PredictionApiClient(predictionKey, endPoint);
+const predictor = new PredictionApi.PredictionAPIClient(predictionKey, endPoint);
 const testFile = fs.readFileSync(`${sampleDataRoot}/Test/test_image.jpg`);

 const results = await predictor.classifyImage(sampleProject.id, publishIterationName, testFile);
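The renamed imports above expose the client classes as module members (`TrainingApi.TrainingAPIClient`, `PredictionApi.PredictionAPIClient`) rather than as the module's default export. Elsewhere in this tutorial, training is awaited by re-checking the iteration status with a `setTimeoutPromise` delay; that wait loop can be sketched language-agnostically in Python (the `FakeTrainer` below is a hypothetical stand-in, not the real SDK client):

```python
import time

# Hypothetical stand-in for the Custom Vision training client: reports
# "Training" twice, then "Completed", like a model finishing its run.
class FakeTrainer:
    def __init__(self):
        self._calls = 0

    def get_iteration(self, project_id, iteration_id):
        self._calls += 1
        return {"status": "Completed" if self._calls >= 3 else "Training"}

def wait_for_training(trainer, project_id, iteration_id, delay_s=0.01):
    # Mirrors the tutorial's setTimeoutPromise loop: check, sleep, re-check.
    iteration = trainer.get_iteration(project_id, iteration_id)
    while iteration["status"] != "Completed":
        time.sleep(delay_s)
        iteration = trainer.get_iteration(project_id, iteration_id)
    return iteration

result = wait_for_training(FakeTrainer(), "project-id", "iteration-id")
print(result["status"])  # Completed
```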

articles/cognitive-services/Speech-Service/how-to-async-conversation-transcription.md

Lines changed: 2 additions & 2 deletions

@@ -51,8 +51,8 @@ Conversation conversation = conversationFuture.get();

 // Create an audio stream from a wav file or from the default microphone if you want to stream live audio from the supported devices
 // Replace with your own audio file name and Helper class which implements AudioConfig using PullAudioInputStreamCallback
-PullAudioInputStreamCallback wavfilePullStreamCallback = Helper.OpenWavFile("16Khz16Bits8channelsOfRecordedPCMAudio.wav");
-// Create an audio stream format assuming the file used above is 16Khz, 16 bits and 8 channel pcm wav file
+PullAudioInputStreamCallback wavfilePullStreamCallback = Helper.OpenWavFile("16kHz16Bits8channelsOfRecordedPCMAudio.wav");
+// Create an audio stream format assuming the file used above is 16kHz, 16 bits and 8 channel pcm wav file
 AudioStreamFormat audioStreamFormat = AudioStreamFormat.getWaveFormatPCM((long)16000, (short)16,(short)8);
 // Create an input stream
 AudioInputStream audioStream = AudioInputStream.createPullStream(wavfilePullStreamCallback, audioStreamFormat);
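The corrected file name and comment describe a 16 kHz, 16-bit, 8-channel PCM WAV. For anyone who needs a test fixture in that exact format, here is a self-contained sketch using Python's stdlib `wave` module (not part of the article) that writes one matching the parameters the Java snippet declares via `AudioStreamFormat.getWaveFormatPCM(16000, 16, 8)`:

```python
import io
import wave

# Write an 8-channel, 16-bit, 16 kHz PCM WAV in memory -- the format the
# conversation-transcription sample above expects.
buf = io.BytesIO()
with wave.open(buf, "wb") as wav:
    wav.setnchannels(8)      # 8 channels of recorded PCM audio
    wav.setsampwidth(2)      # 16 bits = 2 bytes per sample
    wav.setframerate(16000)  # 16 kHz sample rate
    # One second of silence: 16000 frames x 8 channels x 2 bytes.
    wav.writeframes(b"\x00\x00" * 8 * 16000)

# Read it back to confirm the header round-trips.
buf.seek(0)
with wave.open(buf, "rb") as wav:
    print(wav.getnchannels(), wav.getsampwidth() * 8, wav.getframerate())
    # 8 16 16000
```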

articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ The Speech service is largely similar to Bing Speech, with the following differe
 | Partial or interim results | :heavy_check_mark: | :heavy_check_mark: | With WebSockets protocol or SDK. |
 | Custom speech models | :heavy_check_mark: | :heavy_check_mark: | Bing Speech requires a separate Custom Speech subscription. |
 | Custom voice fonts | :heavy_check_mark: | :heavy_check_mark: | Bing Speech requires a separate Custom Voice subscription. |
-| 24-KHz voices | :heavy_minus_sign: | :heavy_check_mark: |
+| 24-kHz voices | :heavy_minus_sign: | :heavy_check_mark: |
 | Speech intent recognition | Requires separate LUIS API call | Integrated (with SDK) | You can use a LUIS key with the Speech service. |
 | Simple intent recognition | :heavy_minus_sign: | :heavy_check_mark: |
 | Batch transcription of long audio files | :heavy_minus_sign: | :heavy_check_mark: |

articles/cognitive-services/Speech-Service/includes/supported-audio-formats.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ ms.date: 03/16/2020
 ms.author: dapine
 ---

-The default audio streaming format is WAV (16 KHz or 8Khz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported. [Additional configuration](../how-to-use-codec-compressed-audio-input-streams.md) is needed to enable the formats listed below.
+The default audio streaming format is WAV (16kHz or 8kHz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported. [Additional configuration](../how-to-use-codec-compressed-audio-input-streams.md) is needed to enable the formats listed below.

 - MP3
 - OPUS/OGG

articles/cognitive-services/Speech-Service/quickstart-python-text-to-speech.md

Lines changed: 2 additions & 2 deletions

@@ -94,7 +94,7 @@ Next, you need to add required headers for the request. Make sure that you updat
 Then construct the request body using Speech Synthesis Markup Language (SSML). This sample defines the structure, and uses the `tts` input you created earlier.

 >[!NOTE]
-> This sample uses the `Guy24KRUS` voice font. For a complete list of Microsoft provided voices/languages, see [Language support](language-support.md).
+> This sample uses the `Guy24kRUS` voice font. For a complete list of Microsoft provided voices/languages, see [Language support](language-support.md).
 > If you're interested in creating a unique, recognizable voice for your brand, see [Creating custom voice fonts](how-to-customize-voice-font.md).

 Finally, you'll make a request to the service. If the request is successful, and a 200 status code is returned, the speech response is written to a timestamped file.
@@ -117,7 +117,7 @@ def save_audio(self):
     voice = ElementTree.SubElement(xml_body, 'voice')
     voice.set('{http://www.w3.org/XML/1998/namespace}lang', 'en-US')
     voice.set(
-        'name', 'Microsoft Server Speech Text to Speech Voice (en-US, Guy24KRUS)')
+        'name', 'Microsoft Server Speech Text to Speech Voice (en-US, Guy24kRUS)')
     voice.text = self.tts
     body = ElementTree.tostring(xml_body)
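The casing fix matters because the SSML `name` attribute is matched against the service's voice list, so `Guy24KRUS` and `Guy24kRUS` are not interchangeable. A standalone sketch of the same ElementTree construction with the corrected name (here `tts` is a local variable standing in for the quickstart's `self.tts`):

```python
from xml.etree import ElementTree

# Build the SSML body the quickstart sends, using the corrected voice name.
tts = "Hello, world!"
xml_body = ElementTree.Element("speak", version="1.0")
xml_body.set("{http://www.w3.org/XML/1998/namespace}lang", "en-us")
voice = ElementTree.SubElement(xml_body, "voice")
voice.set("{http://www.w3.org/XML/1998/namespace}lang", "en-US")
voice.set(
    "name", "Microsoft Server Speech Text to Speech Voice (en-US, Guy24kRUS)")
voice.text = tts
body = ElementTree.tostring(xml_body)

print(b"Guy24kRUS" in body)  # True
```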

articles/cognitive-services/Speech-Service/speech-synthesis-markup.md

Lines changed: 1 addition & 1 deletion

@@ -452,7 +452,7 @@ For more information on the detailed Speech service phonetic alphabet, see the [
 ## Adjust prosody

-The `prosody` element is used to specify changes to pitch, countour, range, rate, duration, and volume for the text-to-speech output. The `prosody` element may contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.
+The `prosody` element is used to specify changes to pitch, contour, range, rate, duration, and volume for the text-to-speech output. The `prosody` element may contain text and the following elements: `audio`, `break`, `p`, `phoneme`, `prosody`, `say-as`, `sub`, and `s`.

 Because prosodic attribute values can vary over a wide range, the speech recognizer interprets the assigned values as a suggestion of what the actual prosodic values of the selected voice should be. The text-to-speech service limits or substitutes values that are not supported. Examples of unsupported values are a pitch of 1 MHz or a volume of 120.
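To make the corrected attribute list concrete, here is a small sketch that wraps text in a `prosody` element carrying a few of the attributes named above. The voice name and attribute values are illustrative assumptions, not taken from the article:

```python
from xml.etree import ElementTree

# Assemble a voice element containing a prosody element; only some of the
# prosody attributes (pitch, rate, volume) are set -- all are optional.
voice = ElementTree.Element("voice", name="en-US-GuyNeural")
prosody = ElementTree.SubElement(
    voice, "prosody", pitch="+5%", rate="-10%", volume="soft")
prosody.text = "Prosodic values are treated as suggestions by the service."

ssml_fragment = ElementTree.tostring(voice).decode()
print(ssml_fragment)
```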

articles/data-factory/continuous-integration-deployment.md

Lines changed: 14 additions & 2 deletions

@@ -361,7 +361,13 @@ if ($predeployment -eq $true) {
     Write-Host "Stopping deployed triggers"
     $triggerstostop | ForEach-Object {
         Write-host "Disabling trigger " $_
-        Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
+        Remove-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
+        $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_
+        while ($status.Status -ne "Disabled"){
+            Start-Sleep -s 15
+            $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_
+        }
+        Stop-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
     }
 }
 else {
@@ -454,7 +460,13 @@ else {
     Write-Host "Starting active triggers"
     $activeTriggerNames | ForEach-Object {
         Write-host "Enabling trigger " $_
-        Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
+        Add-AzDataFactoryV2TriggerSubscription -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
+        $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_
+        while ($status.Status -ne "Enabled"){
+            Start-Sleep -s 15
+            $status = Get-AzDataFactoryV2TriggerSubscriptionStatus -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_
+        }
+        Start-AzDataFactoryV2Trigger -ResourceGroupName $ResourceGroupName -DataFactoryName $DataFactoryName -Name $_ -Force
     }
 }
 ```
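The PowerShell added in both hunks follows one pattern: change the trigger's event subscription (`Remove-`/`Add-AzDataFactoryV2TriggerSubscription`), then poll `Get-AzDataFactoryV2TriggerSubscriptionStatus` until the expected state is reached before stopping or starting the trigger. A minimal Python sketch of that poll-until-status loop, with a fake status source standing in for the Az cmdlet (the short delay and poll cap are illustrative choices, not from the script):

```python
import time

def wait_for_subscription_status(get_status, expected, delay_s=0.01, max_polls=100):
    # Poll until get_status() returns the expected state; the PowerShell
    # equivalent is the `while ($status.Status -ne "Disabled")` loop above.
    for _ in range(max_polls):
        if get_status() == expected:
            return True
        time.sleep(delay_s)
    return False

# Fake status source: reports "Deprovisioning" twice, then "Disabled".
states = iter(["Deprovisioning", "Deprovisioning", "Disabled"])
print(wait_for_subscription_status(lambda: next(states), "Disabled"))  # True
```

Unlike the PowerShell, this sketch caps the number of polls so a subscription stuck in a transitional state cannot hang the deployment forever.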
