
Commit 98e704d

Merge pull request #209936 from MicrosoftDocs/repo_sync_working_branch
Confirm merge from repo_sync_working_branch to main to sync with https://github.com/MicrosoftDocs/azure-docs (branch main)
2 parents 8f6cab0 + 2180dde commit 98e704d

File tree

6 files changed (+156, -130 lines)

articles/active-directory/enterprise-users/licensing-service-plan-reference.md

Lines changed: 109 additions & 109 deletions
Large diffs are not rendered by default.

articles/active-directory/saas-apps/ms-confluence-jira-plugin-adminguide.md

Lines changed: 3 additions & 16 deletions
@@ -59,23 +59,10 @@ Note the following information before you install the plug-in:

  The plug-in supports the following versions of Jira and Confluence:

- * Jira Core and Software: 6.0 to 7.12
- * Jira Service Desk: 3.0.0 to 3.5.0
+ * Jira Core and Software: 6.0 to 8.22.1
+ * Jira Service Desk: 3.0.0 to 4.22.1
  * JIRA also supports 5.2. For more details, click [Microsoft Azure Active Directory single sign-on for JIRA 5.2](./jira52microsoft-tutorial.md)
- * Confluence: 5.0 to 5.10
- * Confluence: 6.0.1
- * Confluence: 6.1.1
- * Confluence: 6.2.1
- * Confluence: 6.3.4
- * Confluence: 6.4.0
- * Confluence: 6.5.0
- * Confluence: 6.6.2
- * Confluence: 6.7.0
- * Confluence: 6.8.1
- * Confluence: 6.9.0
- * Confluence: 6.10.0
- * Confluence: 6.11.0
- * Confluence: 6.12.0
+ * Confluence: 5.0 to 7.17.0

  ## Installation

articles/sentinel/threat-intelligence-integration.md

Lines changed: 3 additions & 0 deletions
@@ -62,6 +62,9 @@ To connect to TAXII threat intelligence feeds, follow the instructions to [conne

  - [Learn about Kaspersky integration with Microsoft Sentinel](https://support.kaspersky.com/15908)

+ ### PickupSTIX
+
+ - [Fill out this web form](https://www.celerium.com/pickupstix) to get the API Root, Collection IDs, Username, and Password for the free TAXII 2.1 Feeds on the PickupSTIX TAXII Server.

  ### Pulsedive

articles/synapse-analytics/spark/apache-spark-autoscale.md

Lines changed: 6 additions & 1 deletion
@@ -74,7 +74,12 @@ Apache Spark enables configuration of Dynamic Allocation of Executors through co

  ```
  The defaults specified through the code override the values set through the user interface.

- On enabling Dynamic allocation, Executors scale up or down based on the utilization of the Executors. This ensures that the Executors are provisioned in accordance with the needs of the job being run.
+ In this example, if your job requires only 2 executors, it uses only 2 executors. When the job requires more, it scales up to 6 executors (1 driver, 6 executors). When the job no longer needs the executors, it decommissions them; if it no longer needs a node, it frees up that node.
+
+ >[!NOTE]
+ >The maxExecutors setting reserves the configured number of executors. Considering the example, even if you use only 2, it reserves 6.
+
+ Hence, on enabling Dynamic allocation, Executors scale up or down based on the utilization of the Executors. This ensures that the Executors are provisioned in accordance with the needs of the job being run.

  ## Best practices
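The configuration block truncated in the diff context above sets Spark's dynamic-allocation properties. As a hedged sketch only (not the article's exact snippet): the `spark.dynamicAllocation.*` keys are standard Apache Spark configuration properties, the 2-to-6 executor range mirrors the example in the added text, and the C# `SparkSession` builder is used for consistency with the .NET for Apache Spark article also touched by this commit.

```csharp
using Microsoft.Spark.Sql;

// Sketch: enable dynamic allocation with the 2-to-6 executor range
// from the example above. These settings take effect when the session
// is created; values set in code override the user-interface defaults.
SparkSession spark = SparkSession
    .Builder()
    .AppName("DynamicAllocationSketch")
    .Config("spark.dynamicAllocation.enabled", "true")
    .Config("spark.dynamicAllocation.minExecutors", "2")
    .Config("spark.dynamicAllocation.maxExecutors", "6")
    .GetOrCreate();
```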

43.7 KB binary file (not rendered)

articles/synapse-analytics/spark/spark-dotnet.md

Lines changed: 35 additions & 4 deletions
@@ -23,18 +23,43 @@ You can analyze data with .NET for Apache Spark through Spark batch job definiti

  Visit the tutorial to learn how to use Azure Synapse Analytics to [create Apache Spark job definitions for Synapse Spark pools](apache-spark-job-definitions.md). If you haven't packaged your app to submit to Azure Synapse, complete the following steps.

- 1. Run the following commands to publish your app. Be sure to replace *mySparkApp* with the path to your app.
+ 1. Configure your `dotnet` application dependencies for compatibility with Synapse Spark.
+    The required .NET Spark version is noted in the Synapse Studio interface, in your Apache Spark pool configuration under the Manage toolbox.
+
+    :::image type="content" source="./media/apache-spark-job-definitions/net-spark-workspace-compatibility.png" alt-text="Screenshot that shows properties, including the .NET Spark version.":::
+
+    Create your project as a .NET console application that outputs an Ubuntu x86 executable.
+
+    ```
+    <Project Sdk="Microsoft.NET.Sdk">
+
+      <PropertyGroup>
+        <OutputType>Exe</OutputType>
+        <TargetFramework>netcoreapp3.1</TargetFramework>
+      </PropertyGroup>
+
+      <ItemGroup>
+        <PackageReference Include="Microsoft.Spark" Version="2.1.0" />
+      </ItemGroup>
+
+    </Project>
+    ```
+
+ 2. Run the following commands to publish your app. Be sure to replace *mySparkApp* with the path to your app.

     ```dotnetcli
     cd mySparkApp
     dotnet publish -c Release -f netcoreapp3.1 -r ubuntu.18.04-x64
     ```

- 2. Zip the contents of the publish folder, `publish.zip` for example, that was created as a result of Step 1. All the assemblies should be in the first layer of the ZIP file and there should be no intermediate folder layer. This means when you unzip `publish.zip`, all assemblies are extracted into your current working directory.
+ 3. Zip the contents of the publish folder, `publish.zip` for example, that was created as a result of Step 2. All the assemblies should be in the root of the ZIP file, with no intermediate folder layer. This means that when you unzip `publish.zip`, all assemblies are extracted into your current working directory.

     **On Windows:**

-    Use an extraction program, like [7-Zip](https://www.7-zip.org/) or [WinZip](https://www.winzip.com/), to extract the file into the bin directory with all the published binaries.
+    Using Windows PowerShell or PowerShell 7, create a .zip from the contents of your publish directory.
+    ```PowerShell
+    Compress-Archive publish/* publish.zip -Update
+    ```

     **On Linux:**
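The Linux instructions fall outside this hunk's context. As a sketch only, one way to produce the same flat `publish.zip` with the common `zip` utility; the publish path below is the default output location for the `dotnet publish` command above and is an assumption worth verifying against your project.

```bash
# Run from the app directory after `dotnet publish`. Zipping from inside
# the publish folder keeps all assemblies at the root of the archive,
# with no intermediate folder layer.
cd mySparkApp/bin/Release/netcoreapp3.1/ubuntu.18.04-x64/publish
zip -r ../publish.zip .
```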
@@ -48,7 +73,7 @@ Visit the tutorial to learn how to use Azure Synapse Analytics to [create Apache

  Notebooks are a great option for prototyping your .NET for Apache Spark pipelines and scenarios. You can start working with, understanding, filtering, displaying, and visualizing your data quickly and efficiently.

- Data engineers, data scientists, business analysts, and machine learning engineers are all able to collaborate over a shared, interactive document. You see immediate results from data exploration, and can visualize your data in the same notebook.
+ Data engineers, data scientists, business analysts, and machine learning engineers are all able to collaborate over a shared, interactive document. You see immediate results from data exploration, and can visualize your data in the same notebook.

  ### How to use .NET for Apache Spark notebooks
@@ -79,6 +104,12 @@ The following features are available when you use .NET for Apache Spark in the A

  * Support for defining [.NET user-defined functions that can run within Apache Spark](/dotnet/spark/how-to-guides/udf-guide). We recommend [Write and call UDFs in .NET for Apache Spark Interactive environments](/dotnet/spark/how-to-guides/dotnet-interactive-udf-issue) for learning how to use UDFs in .NET for Apache Spark Interactive experiences.
  * Support for visualizing output from your Spark jobs using different charts (such as line, bar, or histogram) and layouts (such as single, overlaid, and so on) using the `XPlot.Plotly` library.
  * Ability to include NuGet packages into your C# notebook.
+ ## Troubleshooting
+
+ ### `DotNetRunner: null` / `Futures timeout` in Synapse Spark job definition runs
+ Synapse Spark job definitions on Spark pools using Spark 2.4 require `Microsoft.Spark` 1.0.0. Clear your `bin` and `obj` directories, and publish the project using 1.0.0.
+ ### OutOfMemoryError: java heap space at org.apache.spark...
+ .NET for Apache Spark 1.0.0 uses a different debug architecture than 1.1.1+. You will have to use 1.0.0 for your published version and 1.1.1+ for local debugging.

  ## Next steps
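The UDF support called out in the feature list above maps to the `Udf` helper in `Microsoft.Spark.Sql.Functions`. A minimal sketch, assuming a running .NET for Apache Spark session; the one-row DataFrame and the `name` column are hypothetical, for illustration only.

```csharp
using System;
using Microsoft.Spark.Sql;
using static Microsoft.Spark.Sql.Functions;

SparkSession spark = SparkSession.Builder().GetOrCreate();

// Hypothetical one-row DataFrame used only to demonstrate the UDF.
DataFrame df = spark.Sql("SELECT 'synapse' AS name");

// Udf<TIn, TOut> wraps a .NET lambda so Spark can invoke it per row.
Func<Column, Column> toUpper = Udf<string, string>(s => s.ToUpper());

df.Select(toUpper(df["name"])).Show();
```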
