Commit 491aed0

Merge pull request #86962 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to master to sync with https://github.com/Microsoft/azure-docs (branch master)
2 parents 97d2c9d + c883a03 commit 491aed0

13 files changed: +146 −129 lines

articles/active-directory/saas-apps/sharepoint-on-premises-tutorial.md

Lines changed: 1 addition & 1 deletion
@@ -272,7 +272,7 @@ The objective of this section is to create a test user in the Azure portal calle
 > [!NOTE]
 > Please note that AzureCP is not a Microsoft product or supported by Microsoft Technical Support. Download, install and configure AzureCP on the on-premises SharePoint farm per https://yvand.github.io/AzureCP/
 
-11. **Grant access to the Azure Active Directory Security Group in the on-premise SharePoint** :- The groups must be granted access to the application in SharePoint on-premises. Use the following steps to set the permissions to access the web application.
+11. **Grant access to the Azure Active Directory Security Group in the on-premises SharePoint** :- The groups must be granted access to the application in SharePoint on-premises. Use the following steps to set the permissions to access the web application.
 
 12. In Central Administration, click on Application Management, Manage web applications, then select the web application to activate the ribbon and click on User Policy.

articles/aks/use-multiple-node-pools.md

Lines changed: 60 additions & 60 deletions
@@ -482,68 +482,68 @@ Edit these values as need to update, add, or delete node pools as needed:

The entire template body was rewritten (60 deletions, 60 additions); the removed and added lines match line for line in content, so only the resulting template is shown:

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName": {
      "type": "string",
      "metadata": {
        "description": "The name of your existing AKS cluster."
      }
    },
    "location": {
      "type": "string",
      "metadata": {
        "description": "The location of your existing AKS cluster."
      }
    },
    "agentPoolName": {
      "type": "string",
      "defaultValue": "myagentpool",
      "metadata": {
        "description": "The name of the agent pool to create or update."
      }
    },
    "vnetSubnetId": {
      "type": "string",
      "defaultValue": "",
      "metadata": {
        "description": "The Vnet subnet resource ID for your existing AKS cluster."
      }
    }
  },
  "variables": {
    "apiVersion": {
      "aks": "2019-04-01"
    },
    "agentPoolProfiles": {
      "maxPods": 30,
      "osDiskSizeGB": 0,
      "agentCount": 3,
      "agentVmSize": "Standard_DS2_v2",
      "osType": "Linux",
      "vnetSubnetId": "[parameters('vnetSubnetId')]"
    }
  },
  "resources": [
    {
      "apiVersion": "2019-04-01",
      "type": "Microsoft.ContainerService/managedClusters/agentPools",
      "name": "[concat(parameters('clusterName'),'/', parameters('agentPoolName'))]",
      "location": "[parameters('location')]",
      "properties": {
        "maxPods": "[variables('agentPoolProfiles').maxPods]",
        "osDiskSizeGB": "[variables('agentPoolProfiles').osDiskSizeGB]",
        "count": "[variables('agentPoolProfiles').agentCount]",
        "vmSize": "[variables('agentPoolProfiles').agentVmSize]",
        "osType": "[variables('agentPoolProfiles').osType]",
        "storageProfile": "ManagedDisks",
        "type": "VirtualMachineScaleSets",
        "vnetSubnetID": "[variables('agentPoolProfiles').vnetSubnetId]",
        "orchestratorVersion": "1.13.10"
      }
    }
  ]
}
```
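Before deploying an edited copy of the agent-pool template, a quick structural check can catch typos. The sketch below is illustrative only: it embeds a trimmed skeleton of the template above, then lists which parameters a deployment must supply (those without a `defaultValue`) and the child resource type.

```python
import json

# Trimmed skeleton of the agent-pool template shown above.
template = json.loads("""
{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "clusterName":   {"type": "string"},
    "location":      {"type": "string"},
    "agentPoolName": {"type": "string", "defaultValue": "myagentpool"},
    "vnetSubnetId":  {"type": "string", "defaultValue": ""}
  },
  "resources": [
    {
      "apiVersion": "2019-04-01",
      "type": "Microsoft.ContainerService/managedClusters/agentPools"
    }
  ]
}
""")

# Parameters without a defaultValue must be supplied at deployment time.
required = sorted(p for p, spec in template["parameters"].items()
                  if "defaultValue" not in spec)
print(required)                          # ['clusterName', 'location']
print(template["resources"][0]["type"])  # Microsoft.ContainerService/managedClusters/agentPools
```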
549549

articles/data-explorer/spark-connector.md

Lines changed: 57 additions & 30 deletions
@@ -86,7 +86,7 @@ For more information, see [connector usage](https://github.com/Azure/azure-kusto
 
 ![Import Azure Data Explorer library](media/spark-connector/db-create-library.png)
 
-1. Add additional dependencies:
+1. Add additional dependencies (not necessary if used from maven) :
 
 ![Add dependencies](media/spark-connector/db-dependencies.png)

@@ -130,7 +130,7 @@ For more information on Azure Data Explorer principal roles, see [role-based aut
 
 val appId = KustoSparkTestAppId
 val appKey = KustoSparkTestAppKey
-val authorityId = "72f988bf-86f1-41af-91ab-2d7cd011db47"
+val authorityId = "72f988bf-86f1-41af-91ab-2d7cd011db47" // Optional - defaults to microsoft.com
 val cluster = "Sparktest.eastus2"
 val database = "TestDb"
 val table = "StringAndIntTable"
@@ -139,61 +139,84 @@ For more information on Azure Data Explorer principal roles, see [role-based aut
 1. Write Spark DataFrame to Azure Data Explorer cluster as batch:
 
 ```scala
+import com.microsoft.kusto.spark.datasink.KustoSinkOptions
+val conf = Map(
+  KustoSinkOptions.KUSTO_CLUSTER -> cluster,
+  KustoSinkOptions.KUSTO_TABLE -> table,
+  KustoSinkOptions.KUSTO_DATABASE -> database,
+  KustoSinkOptions.KUSTO_AAD_CLIENT_ID -> appId,
+  KustoSinkOptions.KUSTO_AAD_CLIENT_PASSWORD -> appKey,
+  KustoSinkOptions.KUSTO_AAD_AUTHORITY_ID -> authorityId)
+
 df.write
   .format("com.microsoft.kusto.spark.datasource")
-  .option(KustoOptions.KUSTO_CLUSTER, cluster)
-  .option(KustoOptions.KUSTO_DATABASE, database)
-  .option(KustoOptions.KUSTO_TABLE, table)
-  .option(KustoOptions.KUSTO_AAD_CLIENT_ID, appId)
-  .option(KustoOptions.KUSTO_AAD_CLIENT_PASSWORD, appKey)
-  .option(KustoOptions.KUSTO_AAD_AUTHORITY_ID, authorityId)
+  .options(conf)
   .save()
+
 ```
-
+
+Or use the simplified syntax:
+
+```scala
+import com.microsoft.kusto.spark.datasink.SparkIngestionProperties
+import com.microsoft.kusto.spark.sql.extension.SparkExtension._
+
+val sparkIngestionProperties = Some(new SparkIngestionProperties()) // Optional, use None if not needed
+df.write.kusto(cluster, database, table, conf, sparkIngestionProperties)
+```
+
 1. Write streaming data:
 
 ```scala
 import org.apache.spark.sql.streaming.Trigger
 import java.util.concurrent.TimeUnit
-
+import java.util.concurrent.TimeUnit
+import org.apache.spark.sql.streaming.Trigger
+
 // Set up a checkpoint and disable codeGen. Set up a checkpoint and disable codeGen as a workaround for an known issue
 spark.conf.set("spark.sql.streaming.checkpointLocation", "/FileStore/temp/checkpoint")
-spark.conf.set("spark.sql.codegen.wholeStage","false")
+spark.conf.set("spark.sql.codegen.wholeStage","false") // Use in case a NullPointerException is thrown inside codegen iterator
 
-// Write to a Kusto table fro streaming source
-val kustoQ = csvDf
+// Write to a Kusto table from a streaming source
+val kustoQ = df
   .writeStream
   .format("com.microsoft.kusto.spark.datasink.KustoSinkProvider")
-  .options(Map(
-    KustoOptions.KUSTO_CLUSTER -> cluster,
-    KustoOptions.KUSTO_TABLE -> table,
-    KustoOptions.KUSTO_DATABASE -> database,
-    KustoOptions.KUSTO_AAD_CLIENT_ID -> appId,
-    KustoOptions.KUSTO_AAD_CLIENT_PASSWORD -> appKey,
-    KustoOptions.KUSTO_AAD_AUTHORITY_ID -> authorityId))
-  .trigger(Trigger.Once)
+  .options(conf)
+  .option(KustoSinkOptions.KUSTO_WRITE_ENABLE_ASYNC, "true") // Optional, better for streaming, harder to handle errors
+  .trigger(Trigger.ProcessingTime(TimeUnit.SECONDS.toMillis(10))) // Sync this with the ingestionBatching policy of the database
+  .start()
 
-kustoQ.start().awaitTermination(TimeUnit.MINUTES.toMillis(8))
 ```
 
 ## Spark source: Reading from Azure Data Explorer
 
 1. When reading small amounts of data, define the data query:
 
 ```scala
+import com.microsoft.kusto.spark.datasource.KustoSourceOptions
+import org.apache.spark.SparkConf
+import org.apache.spark.sql._
+import com.microsoft.azure.kusto.data.ClientRequestProperties
+
+val query = s"$table | where (ColB % 1000 == 0) | distinct ColA"
 val conf: Map[String, String] = Map(
-  KustoOptions.KUSTO_AAD_CLIENT_ID -> appId,
-  KustoOptions.KUSTO_AAD_CLIENT_PASSWORD -> appKey,
-  KustoOptions.KUSTO_QUERY -> s"$table | where (ColB % 1000 == 0) | distinct ColA"
+  KustoSourceOptions.KUSTO_AAD_CLIENT_ID -> appId,
+  KustoSourceOptions.KUSTO_AAD_CLIENT_PASSWORD -> appKey
 )
-
+
+val df = spark.read.format("com.microsoft.kusto.spark.datasource").
+  options(conf).
+  option(KustoSourceOptions.KUSTO_QUERY, query).
+  option(KustoSourceOptions.KUSTO_DATABASE, database).
+  option(KustoSourceOptions.KUSTO_CLUSTER, cluster).
+  load()
+
 // Simplified syntax flavor
-import org.apache.spark.sql._
 import com.microsoft.kusto.spark.sql.extension.SparkExtension._
-import org.apache.spark.SparkConf
 
-val df = spark.read.kusto(cluster, database, "", conf)
-display(df)
+val cpr: Option[ClientRequestProperties] = None // Optional
+val df2 = spark.read.kusto(cluster, database, query, conf, cpr)
+display(df2)
 ```
 
 1. When reading large amounts of data, transient blob storage must be provided. Provide storage container SAS key, or storage account name, account key, and container name. This step is only required for the current preview release of the Spark connector.
@@ -211,6 +234,10 @@ For more information on Azure Data Explorer principal roles, see [role-based aut
 1. Read from Azure Data Explorer:
 
 ```scala
+val conf3 = Map(
+  KustoSourceOptions.KUSTO_AAD_CLIENT_ID -> appId,
+  KustoSourceOptions.KUSTO_AAD_CLIENT_PASSWORD -> appKey,
+  KustoSourceOptions.KUSTO_BLOB_STORAGE_SAS_URL -> storageSas)
 val df2 = spark.read.kusto(cluster, database, "ReallyBigTable", conf3)
 
 val dfFiltered = df2
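The Kusto query used in the read examples, `$table | where (ColB % 1000 == 0) | distinct ColA`, keeps only rows whose ColB is a multiple of 1000 and then deduplicates ColA. A plain-Python model of that pipeline, using made-up sample rows for illustration:

```python
# Hypothetical sample rows for StringAndIntTable: (ColA, ColB) pairs.
rows = [("a", 0), ("b", 1000), ("a", 2000), ("c", 7), ("d", 999)]

# where (ColB % 1000 == 0) | distinct ColA
result = sorted({col_a for col_a, col_b in rows if col_b % 1000 == 0})
print(result)  # ['a', 'b']
```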

articles/data-lake-analytics/data-lake-analytics-overview.md

Lines changed: 1 addition & 1 deletion
@@ -37,7 +37,7 @@ Data Lake Analytics is a cost-effective solution for running big data workloads.
 
 ### Works with all your Azure data
 
-Data Lake Analytics works with Azure Data Lake Store for the highest performance, throughput, and parallelization and works with Azure Storage blobs, Azure SQL Database, Azure Warehouse.
+Data Lake Analytics works with Azure Data Lake Storage for the highest performance, throughput, and parallelization and works with Azure Storage blobs, Azure SQL Database, Azure Warehouse.
 
 ### Next steps
4343

articles/iot-hub/iot-hub-device-sdk-platform-support.md

Lines changed: 4 additions & 4 deletions
@@ -66,10 +66,10 @@ There are several platforms supported.
 
 ### Node.js SDK
 
-| OS                                           | Arch | Node version |
-|----------------------------------------------|------|--------------|
-| Ubuntu 16.04 LTS (using node 6 docker image) | X64  | Node 6       |
-| Windows Server 2016                          | X64  | Node 6       |
+| OS                                           | Arch | Node version    |
+|----------------------------------------------|------|-----------------|
+| Ubuntu 16.04 LTS (using node 6 docker image) | X64  | LTS and Current |
+| Windows Server 2016                          | X64  | LTS and Current |
 
 ### Java SDK

articles/network-watcher/network-watcher-alert-triggered-packet-capture.md

Lines changed: 1 addition & 2 deletions
@@ -301,8 +301,7 @@ The following example is PowerShell code that can be used in the function. There
 Write-Output ("Resource Type: {0}" -f $requestBody.context.resourceType)
 
 #Get the Network Watcher in the VM's region
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $requestBody.context.resourceRegion}
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $requestBody.context.resourceRegion}
 
 #Get existing packetCaptures
 $packetCaptures = Get-AzNetworkWatcherPacketCapture -NetworkWatcher $networkWatcher
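Each Network Watcher snippet in this commit now selects the watcher with a single filter over all resources: match on resource type and region. The same selection logic, modeled in plain Python over hypothetical resource records (names and fields below are illustrative, not Az module output):

```python
# Hypothetical resource listing, standing in for Get-AzResource output.
resources = [
    {"ResourceType": "Microsoft.Compute/virtualMachines",
     "Location": "westcentralus", "Name": "TestVm1"},
    {"ResourceType": "Microsoft.Network/networkWatchers",
     "Location": "westcentralus", "Name": "NetworkWatcher_westcentralus"},
    {"ResourceType": "Microsoft.Network/networkWatchers",
     "Location": "eastus", "Name": "NetworkWatcher_eastus"},
]

# Equivalent of: Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers"
#                       -and $_.Location -eq $region}
def find_network_watcher(resources, region):
    return next(r for r in resources
                if r["ResourceType"] == "Microsoft.Network/networkWatchers"
                and r["Location"] == region)

print(find_network_watcher(resources, "westcentralus")["Name"])
# NetworkWatcher_westcentralus
```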

articles/network-watcher/network-watcher-connectivity-powershell.md

Lines changed: 4 additions & 8 deletions
@@ -53,8 +53,7 @@ $RG = Get-AzResourceGroup -Name $rgName
 $VM1 = Get-AzVM -ResourceGroupName $rgName | Where-Object -Property Name -EQ $sourceVMName
 $VM2 = Get-AzVM -ResourceGroupName $rgName | Where-Object -Property Name -EQ $destVMName
 
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location}
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location}
 
 Test-AzNetworkWatcherConnectivity -NetworkWatcher $networkWatcher -SourceId $VM1.Id -DestinationId $VM2.Id -DestinationPort 80
 ```
@@ -145,8 +144,7 @@ $sourceVMName = "MultiTierApp0"
 $RG = Get-AzResourceGroup -Name $rgName
 $VM1 = Get-AzVM -ResourceGroupName $rgName | Where-Object -Property Name -EQ $sourceVMName
 
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
 
 Test-AzNetworkWatcherConnectivity -NetworkWatcher $networkWatcher -SourceId $VM1.Id -DestinationAddress 13.107.21.200 -DestinationPort 80
 ```
@@ -209,8 +207,7 @@ $sourceVMName = "MultiTierApp0"
 $RG = Get-AzResourceGroup -Name $rgName
 $VM1 = Get-AzVM -ResourceGroupName $rgName | Where-Object -Property Name -EQ $sourceVMName
 
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
 
 
 Test-AzNetworkWatcherConnectivity -NetworkWatcher $networkWatcher -SourceId $VM1.Id -DestinationAddress https://bing.com/
@@ -263,8 +260,7 @@ $RG = Get-AzResourceGroup -Name $rgName
 
 $VM1 = Get-AzVM -ResourceGroupName $rgName | Where-Object -Property Name -EQ $sourceVMName
 
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq $VM1.Location }
 
 Test-AzNetworkWatcherConnectivity -NetworkWatcher $networkWatcher -SourceId $VM1.Id -DestinationAddress https://contosostorageexample.blob.core.windows.net/
 ```

articles/network-watcher/network-watcher-nsg-auditing-powershell.md

Lines changed: 1 addition & 2 deletions
@@ -126,8 +126,7 @@ $nsgbaserules = Get-Content -Path C:\temp\testvm1-nsg.json | ConvertFrom-Json
 The next step is to retrieve the Network Watcher instance. The `$networkWatcher` variable is passed to the `AzNetworkWatcherSecurityGroupView` cmdlet.
 
 ```powershell
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
 ```
 
 ## Get a VM

articles/network-watcher/network-watcher-packet-capture-manage-powershell.md

Lines changed: 1 addition & 2 deletions
@@ -126,8 +126,7 @@ Once the preceding steps are complete, the packet capture agent is installed on
 The next step is to retrieve the Network Watcher instance. This variable is passed to the `New-AzNetworkWatcherPacketCapture` cmdlet in step 4.
 
 ```powershell
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
 ```
 
 ### Step 2

articles/network-watcher/network-watcher-security-group-view-powershell.md

Lines changed: 1 addition & 2 deletions
@@ -44,8 +44,7 @@ The scenario covered in this article retrieves the configured and effective secu
 The first step is to retrieve the Network Watcher instance. This variable is passed to the `Get-AzNetworkWatcherSecurityGroupView` cmdlet.
 
 ```powershell
-$nw = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
-$networkWatcher = Get-AzNetworkWatcher -Name $nw.Name -ResourceGroupName $nw.ResourceGroupName
+$networkWatcher = Get-AzResource | Where {$_.ResourceType -eq "Microsoft.Network/networkWatchers" -and $_.Location -eq "WestCentralUS" }
 ```
 
 ## Get a VM
