
Commit ba5ee73

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into heidist-search

2 parents: ff7fd09 + 184ae40

19 files changed: +73 -36 lines

articles/aks/gpu-cluster.md

Lines changed: 7 additions & 4 deletions

@@ -51,7 +51,7 @@ Get the credentials for your AKS cluster using the [az aks get-credentials][az-a
 az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
 ```

-## Install nVidia drivers
+## Install NVIDIA drivers

 Before the GPUs in the nodes can be used, you must deploy a DaemonSet for the NVIDIA device plugin. This DaemonSet runs a pod on each node to provide the required drivers for the GPUs.

@@ -64,12 +64,15 @@ kubectl create namespace gpu-resources
 Create a file named *nvidia-device-plugin-ds.yaml* and paste the following YAML manifest. This manifest is provided as part of the [NVIDIA device plugin for Kubernetes project][nvidia-github].

 ```yaml
-apiVersion: extensions/v1beta1
+apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: nvidia-device-plugin-daemonset
   namespace: gpu-resources
 spec:
+  selector:
+    matchLabels:
+      name: nvidia-device-plugin-ds
   updateStrategy:
     type: RollingUpdate
   template:

@@ -106,7 +109,7 @@ spec:
           path: /var/lib/kubelet/device-plugins
 ```

-Now use the [kubectl apply][kubectl-apply] command to create the DaemonSet and confirm the nVidia device plugin is created successfully, as shown in the following example output:
+Now use the [kubectl apply][kubectl-apply] command to create the DaemonSet and confirm the NVIDIA device plugin is created successfully, as shown in the following example output:

 ```console
 $ kubectl apply -f nvidia-device-plugin-ds.yaml

@@ -184,7 +187,7 @@ To see the GPU in action, schedule a GPU-enabled workload with the appropriate r
 Create a file named *samples-tf-mnist-demo.yaml* and paste the following YAML manifest. The following job manifest includes a resource limit of `nvidia.com/gpu: 1`:

 > [!NOTE]
-> If you receive a version mismatch error when calling into drivers, such as, CUDA driver version is insufficient for CUDA runtime version, review the nVidia driver matrix compatibility chart - [https://docs.nvidia.com/deploy/cuda-compatibility/index.html](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)
+> If you receive a version mismatch error when calling into drivers, such as, CUDA driver version is insufficient for CUDA runtime version, review the NVIDIA driver matrix compatibility chart - [https://docs.nvidia.com/deploy/cuda-compatibility/index.html](https://docs.nvidia.com/deploy/cuda-compatibility/index.html)

 ```yaml
 apiVersion: batch/v1
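The move to `apps/v1` in the hunk above is what drives the new `selector` block: under `apps/v1`, a DaemonSet's `spec.selector.matchLabels` must match the labels in its pod template, which the older `extensions/v1beta1` API defaulted for you. As an illustrative cross-check only (the article itself uses kubectl), the official Kubernetes Python client can verify the deployed object meets that constraint:

```python
# Sketch: confirm the DaemonSet is served by apps/v1 and that its selector
# matches the pod template labels, as that API version requires.
from kubernetes import client, config

config.load_kube_config()  # assumes credentials from the az aks get-credentials step
apps_v1 = client.AppsV1Api()

ds = apps_v1.read_namespaced_daemon_set(
    name="nvidia-device-plugin-daemonset", namespace="gpu-resources")
selector = ds.spec.selector.match_labels
labels = ds.spec.template.metadata.labels
assert all(labels.get(k) == v for k, v in selector.items())
print(ds.status.number_ready, "device plugin pods ready")
```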

articles/aks/windows-node-limitations.md

Lines changed: 2 additions & 1 deletion

@@ -50,7 +50,7 @@ AKS clusters with Windows node pools must use the Azure CNI (advanced) networkin

 ## Can I change the max. # of pods per node?

-It is currently a requirement to be set to a maximum of 30 pods to ensure the reliability of your clusters.
+Yes. For the implications and options that are available, see [Maximum number of pods][maximum-number-of-pods].

 ## How do patch my Windows nodes?

@@ -118,3 +118,4 @@ To get started with Windows Server containers in AKS, [create a node pool that r
 [nodepool-limitations]: use-multiple-node-pools.md#limitations
 [preview-support]: support-policies.md#preview-features-or-feature-flags
 [windows-container-compat]: /virtualization/windowscontainers/deploy-containers/version-compatibility?tabs=windows-server-2019%2Cwindows-10-1909
+[maximum-number-of-pods]: configure-azure-cni.md#maximum-pods-per-node

articles/app-service-mobile/app-service-mobile-windows-store-dotnet-get-started-push.md

Lines changed: 1 addition & 1 deletion

@@ -24,7 +24,7 @@ If you do not use the downloaded quick start server project, you will need the p

 ## Register your app for push notifications

-You need to submit your app to the Microsoft Store, then configure your server project to integrate with [Windows Notification Services (WNS)](https://docs.microsoft.com/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) to send push.
+You need to submit your app to the Microsoft Store, then configure your server project to integrate with [Windows Push Notification Services (WNS)](https://docs.microsoft.com/windows/uwp/design/shell/tiles-and-notifications/windows-push-notification-services--wns--overview) to send push.

 1. In Visual Studio Solution Explorer, right-click the UWP app project, click **Store** > **Associate App with the Store...**.

articles/automation/shared-resources/variables.md

Lines changed: 3 additions & 3 deletions

@@ -41,7 +41,7 @@ When you create a variable with the Azure portal, you must specify a data type f

 The variable isn't restricted to the designated data type. You must set the variable using Windows PowerShell if you want to specify a value of a different type. If you indicate `Not defined`, the value of the variable is set to Null, and you must set the value with the [Set-AzAutomationVariable](https://docs.microsoft.com/powershell/module/az.automation/set-azautomationvariable?view=azps-3.5.0) cmdlet or the `Set-AutomationVariable` activity.

-You can't use the portal to create or change the value for a complex variable type. However, you can provide a value of any type using Windows PowerShell. Complex types are retrieved as a [PSCustomObject](/dotnet/api/system.management.automation.pscustomobject).
+You can't use the Azure portal to create or change the value for a complex variable type. However, you can provide a value of any type using Windows PowerShell. Complex types are retrieved as a [PSCustomObject](/dotnet/api/system.management.automation.pscustomobject).

 You can store multiple values to a single variable by creating an array or hashtable and saving it to the variable.

@@ -71,8 +71,8 @@ The activities in the following table are used to access variables in runbooks a
 Note that `Get-AutomationVariable` does not work in PowerShell, but only in a runbook or DSC configuration. For example, to see the value of an encrypted variable, you might create a runbook to get the variable and then write it to the output stream:

 ```powershell
-$testEncryptVar = Get-AutomationVariable -Name TestVariable
-Write-output "The encrypted variable value is: $testEncryptVar"
+$mytestencryptvar = Get-AutomationVariable -Name TestVariable
+Write-output "The encrypted value of the variable is: $mytestencryptvar"
 ```

 ## Functions to access variables in Python 2 runbooks
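The replaced PowerShell lines above have a direct counterpart in the Python 2 runbook section named in the closing context line. A minimal sketch, assuming the `automationassets` module that Azure Automation makes available inside runbooks (it is not importable outside the service):

```python
# Python 2 runbook sketch: read the same encrypted variable and write it
# to the output stream. Works only inside Azure Automation.
import automationassets

mytestencryptvar = automationassets.get_automation_variable("TestVariable")
print "The encrypted value of the variable is: " + str(mytestencryptvar)
```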

articles/azure-monitor/app/api-filtering-sampling.md

Lines changed: 8 additions & 0 deletions

@@ -376,6 +376,14 @@ You can add as many initializers as you like, and they are called in the order t

 Telemetry processors in OpenCensus Python are simply callback functions called to process telemetry before they are exported. The callback function must accept an [envelope](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/protocol.py#L86) data type as its parameter. To filter out telemetry from being exported, make sure the callback function returns `False`. You can see the schema for Azure Monitor data types in the envelopes [here](https://github.com/census-instrumentation/opencensus-python/blob/master/contrib/opencensus-ext-azure/opencensus/ext/azure/common/protocol.py).

+> [!NOTE]
+> You can modify the `cloud_RoleName` by changing the `ai.cloud.role` attribute in the `tags` field.
+
+```python
+def callback_function(envelope):
+    envelope.tags['ai.cloud.role'] = 'new_role_name.py'
+```
+
 ```python
 # Example for log exporter
 import logging
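Taken together, the snippets added above amount to a processor like the following minimal sketch (the handler wiring, connection string, and filtering condition are illustrative, not part of this commit):

```python
import logging

from opencensus.ext.azure.log_exporter import AzureLogHandler

logger = logging.getLogger(__name__)
handler = AzureLogHandler(connection_string="InstrumentationKey=<your-key>")  # placeholder

def callback_function(envelope):
    # Returning False filters the envelope out of the export.
    if envelope.data.baseData.message.startswith("health-check"):
        return False
    # Change cloud_RoleName via the ai.cloud.role attribute in the tags field.
    envelope.tags['ai.cloud.role'] = 'new_role_name.py'
    return True

handler.add_telemetry_processor(callback_function)
logger.addHandler(handler)
logger.warning("exported with the new role name")
```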

articles/cognitive-services/Custom-Vision-Service/node-tutorial.md

Lines changed: 4 additions & 4 deletions

@@ -47,8 +47,8 @@ Add the following code to your script to create a new Custom Vision service proj
 ```javascript
 const util = require('util');
 const fs = require('fs');
-const TrainingApiClient = require("@azure/cognitiveservices-customvision-training");
-const PredictionApiClient = require("@azure/cognitiveservices-customvision-prediction");
+const TrainingApi = require("@azure/cognitiveservices-customvision-training");
+const PredictionApi = require("@azure/cognitiveservices-customvision-prediction");

 const setTimeoutPromise = util.promisify(setTimeout);

@@ -61,7 +61,7 @@ const endPoint = "https://<my-resource-name>.cognitiveservices.azure.com/"

 const publishIterationName = "classifyModel";

-const trainer = new TrainingApiClient(trainingKey, endPoint);
+const trainer = new TrainingApi.TrainingAPIClient(trainingKey, endPoint);

 (async () => {
     console.log("Creating project...");

@@ -129,7 +129,7 @@ await trainer.publishIteration(sampleProject.id, trainingIteration.id, publishIt
 To send an image to the prediction endpoint and retrieve the prediction, add the following code to the end of the file:

 ```javascript
-const predictor = new PredictionApiClient(predictionKey, endPoint);
+const predictor = new PredictionApi.PredictionAPIClient(predictionKey, endPoint);
 const testFile = fs.readFileSync(`${sampleDataRoot}/Test/test_image.jpg`);

 const results = await predictor.classifyImage(sampleProject.id, publishIterationName, testFile);

articles/cognitive-services/Speech-Service/how-to-async-conversation-transcription.md

Lines changed: 2 additions & 2 deletions

@@ -51,8 +51,8 @@ Conversation conversation = conversationFuture.get();

 // Create an audio stream from a wav file or from the default microphone if you want to stream live audio from the supported devices
 // Replace with your own audio file name and Helper class which implements AudioConfig using PullAudioInputStreamCallback
-PullAudioInputStreamCallback wavfilePullStreamCallback = Helper.OpenWavFile("16Khz16Bits8channelsOfRecordedPCMAudio.wav");
-// Create an audio stream format assuming the file used above is 16Khz, 16 bits and 8 channel pcm wav file
+PullAudioInputStreamCallback wavfilePullStreamCallback = Helper.OpenWavFile("16kHz16Bits8channelsOfRecordedPCMAudio.wav");
+// Create an audio stream format assuming the file used above is 16kHz, 16 bits and 8 channel pcm wav file
 AudioStreamFormat audioStreamFormat = AudioStreamFormat.getWaveFormatPCM((long)16000, (short)16,(short)8);
 // Create an input stream
 AudioInputStream audioStream = AudioInputStream.createPullStream(wavfilePullStreamCallback, audioStreamFormat);

articles/cognitive-services/Speech-Service/how-to-migrate-from-bing-speech.md

Lines changed: 1 addition & 1 deletion

@@ -44,7 +44,7 @@ The Speech service is largely similar to Bing Speech, with the following differe
 | Partial or interim results | :heavy_check_mark: | :heavy_check_mark: | With WebSockets protocol or SDK. |
 | Custom speech models | :heavy_check_mark: | :heavy_check_mark: | Bing Speech requires a separate Custom Speech subscription. |
 | Custom voice fonts | :heavy_check_mark: | :heavy_check_mark: | Bing Speech requires a separate Custom Voice subscription. |
-| 24-KHz voices | :heavy_minus_sign: | :heavy_check_mark: |
+| 24-kHz voices | :heavy_minus_sign: | :heavy_check_mark: |
 | Speech intent recognition | Requires separate LUIS API call | Integrated (with SDK) | You can use a LUIS key with the Speech service. |
 | Simple intent recognition | :heavy_minus_sign: | :heavy_check_mark: |
 | Batch transcription of long audio files | :heavy_minus_sign: | :heavy_check_mark: |

articles/cognitive-services/Speech-Service/includes/supported-audio-formats.md

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ ms.date: 03/16/2020
 ms.author: dapine
 ---

-The default audio streaming format is WAV (16 KHz or 8Khz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported. [Additional configuration](../how-to-use-codec-compressed-audio-input-streams.md) is needed to enable the formats listed below.
+The default audio streaming format is WAV (16kHz or 8kHz, 16-bit, and mono PCM). Outside of WAV / PCM, the compressed input formats listed below are also supported. [Additional configuration](../how-to-use-codec-compressed-audio-input-streams.md) is needed to enable the formats listed below.

 - MP3
 - OPUS/OGG
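For orientation (this include names no SDK; the following assumes the Python package `azure-cognitiveservices-speech`), a compressed format from that list is enabled by declaring a container format for the input stream, whereas the default WAV/PCM needs no extra setup:

```python
import azure.cognitiveservices.speech as speechsdk

# MP3 input needs an explicit container format (and GStreamer, per the
# linked configuration article); 16 kHz/8 kHz 16-bit mono PCM WAV does not.
mp3_format = speechsdk.audio.AudioStreamFormat(
    compressed_stream_format=speechsdk.AudioStreamContainerFormat.MP3)
stream = speechsdk.audio.PushAudioInputStream(stream_format=mp3_format)
audio_config = speechsdk.audio.AudioConfig(stream=stream)
```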

articles/cognitive-services/Speech-Service/quickstart-python-text-to-speech.md

Lines changed: 2 additions & 2 deletions

@@ -94,7 +94,7 @@ Next, you need to add required headers for the request. Make sure that you updat
 Then construct the request body using Speech Synthesis Markup Language (SSML). This sample defines the structure, and uses the `tts` input you created earlier.

 >[!NOTE]
-> This sample uses the `Guy24KRUS` voice font. For a complete list of Microsoft provided voices/languages, see [Language support](language-support.md).
+> This sample uses the `Guy24kRUS` voice font. For a complete list of Microsoft provided voices/languages, see [Language support](language-support.md).
 > If you're interested in creating a unique, recognizable voice for your brand, see [Creating custom voice fonts](how-to-customize-voice-font.md).

 Finally, you'll make a request to the service. If the request is successful, and a 200 status code is returned, the speech response is written to a timestamped file.

@@ -117,7 +117,7 @@ def save_audio(self):
     voice = ElementTree.SubElement(xml_body, 'voice')
     voice.set('{http://www.w3.org/XML/1998/namespace}lang', 'en-US')
     voice.set(
-        'name', 'Microsoft Server Speech Text to Speech Voice (en-US, Guy24KRUS)')
+        'name', 'Microsoft Server Speech Text to Speech Voice (en-US, Guy24kRUS)')
     voice.text = self.tts
     body = ElementTree.tostring(xml_body)
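For context, here is a condensed sketch of the request this quickstart assembles around the corrected voice name; the endpoint URL, header set, and token handling are assumptions drawn from the Speech text-to-speech REST API, not lines in this diff:

```python
from xml.etree import ElementTree

import requests

def synthesize(text, access_token, region="westus"):
    # Build the SSML body the same way the quickstart's save_audio method does.
    xml_body = ElementTree.Element("speak", version="1.0")
    xml_body.set("{http://www.w3.org/XML/1998/namespace}lang", "en-us")
    voice = ElementTree.SubElement(xml_body, "voice")
    voice.set("{http://www.w3.org/XML/1998/namespace}lang", "en-US")
    voice.set("name", "Microsoft Server Speech Text to Speech Voice (en-US, Guy24kRUS)")
    voice.text = text
    response = requests.post(
        "https://{}.tts.speech.microsoft.com/cognitiveservices/v1".format(region),
        headers={
            "Authorization": "Bearer " + access_token,
            "Content-Type": "application/ssml+xml",
            "X-Microsoft-OutputFormat": "riff-24khz-16bit-mono-pcm",
        },
        data=ElementTree.tostring(xml_body),
    )
    response.raise_for_status()
    return response.content  # WAV bytes on a 200 response
```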
