Commit 1173754

Merge pull request #276436 from MicrosoftDocs/main
5/27/2024 PM Publish
2 parents: f2164c6 + 8fce4e6

File tree

2 files changed: +13 −4 lines

articles/sap/automation/configure-devops.md

Lines changed: 5 additions & 3 deletions

````diff
@@ -44,7 +44,7 @@ Open PowerShell ISE and copy the following script and update the parameters to m
     az login --output none --only-show-errors --scope https://graph.microsoft.com//.default
 }
 else {
-    az login --output none --tenant $ARM_TENANT_ID --only-show-errors --scope https://graph.microsoft.com//.default
+    az login --output none --tenant $Env:ARM_TENANT_ID --only-show-errors --scope https://graph.microsoft.com//.default
 }
 
 az config set extension.use_dynamic_install=yes_without_prompt --only-show-errors
@@ -55,7 +55,7 @@ Open PowerShell ISE and copy the following script and update the parameters to m
 if ($differentTenant -eq 'y') {
     $env:AZURE_DEVOPS_EXT_PAT = Read-Host "Please enter your Personal Access Token (PAT) with permissions to add new projects, manage agent pools to the Azure DevOps organization $Env:ADO_Organization"
     try {
-        az devops login --organization $Env:ADO_Organization
+        az devops project list
     }
     catch {
         $_
@@ -171,14 +171,16 @@ Open PowerShell ISE and copy the following script and update the parameters to m
     New-Item -Path $sdaf_path -Type Directory
   }
 }
+
+$branchName = "main"
 
 Set-Location -Path $sdaf_path
 
 if ( Test-Path "New-SDAFDevopsWorkloadZone.ps1") {
   remove-item .\New-SDAFDevopsWorkloadZone.ps1
 }
 
-Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/main/deploy/scripts/New-SDAFDevopsWorkloadZone.ps1 -OutFile .\New-SDAFDevopsWorkloadZone.ps1 ; .\New-SDAFDevopsWorkloadZone.ps1
+Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/sap-automation/$branchName/deploy/scripts/New-SDAFDevopsWorkloadZone.ps1 -OutFile .\New-SDAFDevopsWorkloadZone.ps1 ; .\New-SDAFDevopsWorkloadZone.ps1
 
 ```
````
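The last hunk replaces the hard-coded `main` branch segment of the download URL with a `$branchName` variable. As an aside, a minimal Python sketch of the URL construction this parameterization enables (the helper name is hypothetical, not part of the commit):

```python
# Illustrative only: how a branch-name variable parameterizes the
# raw.githubusercontent.com URL that Invoke-WebRequest downloads.
def raw_script_url(branch, script="New-SDAFDevopsWorkloadZone.ps1"):
    """Build the raw-content URL for a script in Azure/sap-automation."""
    return ("https://raw.githubusercontent.com/Azure/sap-automation/"
            f"{branch}/deploy/scripts/{script}")

# With branch "main" this reproduces the previously hard-coded URL.
print(raw_script_url("main"))
```

With any other branch name the same script path is fetched from that branch instead of `main`.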

articles/synapse-analytics/spark/apache-spark-performance.md

Lines changed: 8 additions & 1 deletion

````diff
@@ -57,7 +57,13 @@ Spark provides its own native caching mechanisms, which can be used through diff
 Spark operates by placing data in memory, so managing memory resources is a key aspect of optimizing the execution of Spark jobs. There are several techniques you can apply to use your cluster's memory efficiently.
 
 * Prefer smaller data partitions and account for data size, types, and distribution in your partitioning strategy.
-* Consider the newer, more efficient [Kryo data serialization](https://github.com/EsotericSoftware/kryo), rather than the default Java serialization.
+* In Synapse Spark (Runtime 3.1 or higher), Kryo data serialization is enabled by default.
+* You can customize the kryoserializer buffer size using Spark configuration, based on your workload requirements:
+
+    ```scala
+    // Set the desired property
+    spark.conf.set("spark.kryoserializer.buffer.max", "256m")
+    ```
 * Monitor and tune Spark configuration settings.
 
 For your reference, the Spark memory structure and some key executor memory parameters are shown in the next image.
@@ -172,6 +178,7 @@ MAX(AMOUNT) -> MAX(cast(AMOUNT as DOUBLE))
 
 ## Next steps
 
+- [Learn about Azure Synapse runtimes for Apache Spark](./apache-spark-version-support.md)
 - [Tuning Apache Spark](https://spark.apache.org/docs/2.4.5/tuning.html)
 - [How to Actually Tune Your Apache Spark Jobs So They Work](https://www.slideshare.net/ilganeli/how-to-actually-tune-your-spark-jobs-so-they-work)
 - [Kryo Serialization](https://github.com/EsotericSoftware/kryo)
````
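The added snippet sets `spark.kryoserializer.buffer.max` to `256m`. As a hedged illustration of what such Spark-style size strings denote, here is a minimal Python decoder; it is not Spark's actual parser, which accepts additional suffix forms such as `kb` and `mb`:

```python
# Illustrative sketch: decode a Spark-style size string such as "256m"
# into a byte count (binary units, as Spark uses).
UNITS = {"k": 1024, "m": 1024 ** 2, "g": 1024 ** 3}

def size_to_bytes(value):
    value = value.strip().lower()
    if value and value[-1] in UNITS:
        return int(value[:-1]) * UNITS[value[-1]]
    return int(value)  # bare number: already a byte count

print(size_to_bytes("256m"))  # 268435456 bytes, i.e. 256 MiB
```

So the commit's `256m` raises the maximum Kryo serialization buffer to 256 MiB, which matters when individual objects are too large for the default buffer.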
