articles/azure-monitor/platform/alerts-metric-create-templates.md (4 additions, 0 deletions)
```diff
@@ -1513,6 +1513,10 @@ This section will describe Azure Resource Manager templates for three scenarios
 - Monitoring all virtual machines (in one Azure region) in a subscription.
 - Monitoring a list of virtual machines (in one Azure region) in a subscription.
+
+> [!NOTE]
+>
+> In a metric alert rule that monitors multiple resources, only one condition is allowed.

 ### Static threshold alert on all virtual machines in one or more resource groups

 This template will create a static threshold metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in one or more resource groups.
```
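To make the constraint in the added note concrete, here is a minimal, hypothetical sketch of the multi-resource alert resource such a template deploys. This is an illustrative fragment, not the article's full template: the subscription ID, alert name, and region are placeholders, and it assumes the `2018-03-01` `Microsoft.Insights/metricAlerts` schema. Note that `criteria.allOf` carries exactly one condition, as the note requires.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "HighCpuOnAllVms",
  "location": "global",
  "properties": {
    "description": "Static threshold alert on all VMs in the subscription (illustrative)",
    "severity": 3,
    "enabled": true,
    "scopes": [
      "/subscriptions/00000000-0000-0000-0000-000000000000"
    ],
    "targetResourceType": "Microsoft.Compute/virtualMachines",
    "targetResourceRegion": "westus",
    "evaluationFrequency": "PT1M",
    "windowSize": "PT5M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "HighCpu",
          "metricName": "Percentage CPU",
          "dimensions": [],
          "operator": "GreaterThan",
          "threshold": 80,
          "timeAggregation": "Average"
        }
      ]
    },
    "actions": []
  }
}
```

The `scopes` array lists the subscription, while `targetResourceType` and `targetResourceRegion` narrow monitoring to virtual machines in a single region, matching the first scenario above.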
articles/azure-monitor/platform/alerts-metric-overview.md (4 additions, 0 deletions)
```diff
@@ -137,6 +137,10 @@ You can specify the scope of monitoring by a single metric alert rule in one of
 Creating metric alert rules that monitor multiple resources is like [creating any other metric alert](alerts-metric.md) that monitors a single resource. The only difference is that you select all the resources you want to monitor. You can also create these rules through [Azure Resource Manager templates](../../azure-monitor/platform/alerts-metric-create-templates.md#template-for-a-metric-alert-that-monitors-multiple-resources). You will receive individual notifications for each monitored resource.
+
+> [!NOTE]
+>
+> In a metric alert rule that monitors multiple resources, only one condition is allowed.

 ## Typical latency

 For metric alerts, you typically get notified in under 5 minutes if you set the alert rule frequency to 1 minute. In cases of heavy load for notification systems, you might see a longer latency.
```
articles/cognitive-services/LUIS/luis-tutorial-bing-spellcheck.md (1 addition, 1 deletion)
```diff
@@ -66,7 +66,7 @@ The endpoint query needs the key passed in the query string parameters for each
 The endpoint URL has several values that need to be passed correctly. The Bing Spell Check API v7 key is just another one of these. You must set the **spellCheck** parameter to true, and you must set the value of **bing-spell-check-subscription-key** to the key value:
 1. In a web browser, copy the preceding string and replace the `region`, `appId`, `luisKey`, and `bingKey` with your own values. Make sure to use the endpoint region, if it is different from your publishing [region](luis-reference-regions.md).
```
articles/event-hubs/event-hubs-availability-and-consistency.md (54 additions, 3 deletions)
```diff
@@ -12,7 +12,7 @@ ms.devlang: na
 ms.topic: article
 ms.tgt_pltfrm: na
 ms.workload: na
-ms.date: 01/29/2020
+ms.date: 03/27/2020
 ms.author: shvija

 ---
```
````diff
@@ -33,12 +33,63 @@ Brewer's theorem defines consistency and availability as follows:
 Event Hubs is built on top of a partitioned data model. You can configure the number of partitions in your event hub during setup, but you cannot change this value later. Since you must use partitions with Event Hubs, you have to make a decision about availability and consistency for your application.

 ## Availability
-The simplest way to get started with Event Hubs is to use the default behavior. If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync?view=azure-dotnet#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+The simplest way to get started with Event Hubs is to use the default behavior.
+
+#### [Azure.Messaging.EventHubs (5.0.0 or later)](#tab/latest)
+If you create a new **[EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient?view=azure-dotnet)** object and use the **[SendAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.sendasync?view=azure-dotnet)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+
+#### [Microsoft.Azure.EventHubs (4.1.0 or earlier)](#tab/old)
+If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync?view=azure-dotnet#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+
+---

 For use cases that require the maximum up time, this model is preferred.

 ## Consistency
-In some scenarios, the ordering of events can be important. For example, you may want your back-end system to process an update command before a delete command. In this instance, you can either set the partition key on an event, or use a `PartitionSender` object to only send events to a certain partition. Doing so ensures that when these events are read from the partition, they are read in order.
+In some scenarios, the ordering of events can be important. For example, you may want your back-end system to process an update command before a delete command. In this instance, you can either set the partition key on an event, or use a `PartitionSender` object (if you are using the old Microsoft.Azure.EventHubs library) to only send events to a certain partition. Doing so ensures that when these events are read from the partition, they are read in order. If you are using the **Azure.Messaging.EventHubs** library, see [Migrating code from PartitionSender to EventHubProducerClient for publishing events to a partition](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md#migrating-code-from-partitionsender-to-eventhubproducerclient-for-publishing-events-to-a-partition) for more information.
+
+#### [Azure.Messaging.EventHubs (5.0.0 or later)](#tab/latest)
+
+```csharp
+var connectionString = "<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>";
+```

 With this configuration, keep in mind that if the particular partition to which you are sending is unavailable, you will receive an error response. As a point of comparison, if you do not have an affinity to a single partition, the Event Hubs service sends your event to the next available partition.
````
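The added code block is truncated in this diff view after its first line. A fuller sketch of partition-pinned publishing with the **Azure.Messaging.EventHubs** 5.x library might look like the following. This is an illustrative sketch, not the article's exact sample: the connection string, event hub name, and partition ID `"0"` are placeholders you would replace with your own values.

```csharp
using System.Text;
using System.Threading.Tasks;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

class PartitionedSend
{
    static async Task Main()
    {
        // Placeholders -- substitute your own namespace connection string and hub name.
        var connectionString = "<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>";
        var eventHubName = "<< NAME OF THE EVENT HUB >>";

        await using var producer = new EventHubProducerClient(connectionString, eventHubName);

        // Pinning the batch to partition "0" preserves ordering within that
        // partition, at the cost of availability if the partition is unreachable.
        var options = new CreateBatchOptions { PartitionId = "0" };
        using EventDataBatch batch = await producer.CreateBatchAsync(options);

        // Events added to the same pinned batch are read back in this order.
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("update")));
        batch.TryAdd(new EventData(Encoding.UTF8.GetBytes("delete")));

        await producer.SendAsync(batch);
    }
}
```

Setting a `PartitionKey` in `CreateBatchOptions` instead of `PartitionId` is the softer option: events with the same key land on the same (service-chosen) partition, which keeps ordering per key without naming a partition explicitly.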
articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md (1 addition, 1 deletion)
```diff
@@ -21,7 +21,7 @@ ms.author: spelluru
 This quickstart shows how to send events to and receive events from an event hub using the **Microsoft.Azure.EventHubs** .NET Core library.

 > [!WARNING]
-> This quickstart uses the old **Microsoft.Azure.EventHubs** package. For a quickstart that uses the latest **Azure.Messaging.EventHubs** library, see [Send and receive events using Azure.Messaging.EventHubs library](get-started-dotnet-standard-send-v2.md). To move your application from the old library to the new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/migration-guide-from-v4.md).
+> This quickstart uses the old **Microsoft.Azure.EventHubs** package. For a quickstart that uses the latest **Azure.Messaging.EventHubs** library, see [Send and receive events using Azure.Messaging.EventHubs library](get-started-dotnet-standard-send-v2.md). To move your application from the old library to the new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md).

 ## Prerequisites
 If you are new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.
```
```diff
-description: Tutorial - Use the Azure Toolkit for IntelliJ to develop Spark applications written in Scala, and submit them to an HDInsight Spark cluster.
+title: 'Azure Toolkit for IntelliJ: Spark app - HDInsight'
+description: Use the Azure Toolkit for IntelliJ to develop Spark applications written in Scala, and submit them to an HDInsight Spark cluster.
 author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
-ms.topic: tutorial
+ms.topic: conceptual
 ms.date: 09/04/2019
 ---

-# Tutorial: Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster
+# Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster

-This tutorial demonstrates how to develop Apache Spark applications on Azure HDInsight using the **Azure Toolkit** plug-in for the IntelliJ IDE. [Azure HDInsight](../hdinsight-overview.md) is a managed, open-source analytics service in the cloud that allows you to use open-source frameworks like Hadoop, Apache Spark, Apache Hive, and Apache Kafka.
+This article demonstrates how to develop Apache Spark applications on Azure HDInsight using the **Azure Toolkit** plug-in for the IntelliJ IDE. [Azure HDInsight](../hdinsight-overview.md) is a managed, open-source analytics service in the cloud that allows you to use open-source frameworks like Hadoop, Apache Spark, Apache Hive, and Apache Kafka.

 You can use the **Azure Toolkit** plug-in in a few ways:

 * Develop and submit a Scala Spark application to an HDInsight Spark cluster.
 * Access your Azure HDInsight Spark cluster resources.
 * Develop and run a Scala Spark application locally.

-In this tutorial, you learn how to:
+In this article, you learn how to:

 > [!div class="checklist"]
 > * Use the Azure Toolkit for IntelliJ plug-in
 > * Develop Apache Spark applications
@@ -30,7 +30,7 @@ In this tutorial, you learn how to:
 * An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).

-* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This tutorial uses Java version 8.0.202.
+* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This article uses Java version 8.0.202.

 * IntelliJ IDEA. This article uses [IntelliJ IDEA Community ver. 2018.3.4](https://www.jetbrains.com/idea/download/).

@@ -72,7 +72,7 @@ Perform the following steps to install the Scala plugin:
 | Property | Description |
 | ----- | ----- |
-|Project name| Enter a name. This tutorial uses `myApp`.|
+|Project name| Enter a name. This article uses `myApp`.|
 |Project location| Enter the desired location to save your project.|
 |Project SDK| This might be blank on your first use of IDEA. Select **New...** and navigate to your JDK.|
 |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select **Spark 1.x**. Otherwise, select **Spark2.x**. This example uses **Spark 2.3.0 (Scala 2.11.8)**.|

@@ -467,15 +467,15 @@ If you're not going to continue to use this application, delete the cluster that
 1. Select **HDInsight clusters** under **Services**.

-1. In the list of HDInsight clusters that appears, select the **...** next to the cluster that you created for this tutorial.
+1. In the list of HDInsight clusters that appears, select the **...** next to the cluster that you created for this article.

-In this tutorial, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
+In this article, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.

 > [!div class="nextstepaction"]
 > [Analyze Apache Spark data using Power BI](apache-spark-use-bi-tools.md)
```