
Commit 195b48a

Merge branch 'master' of https://github.com/MicrosoftDocs/azure-docs-pr into updateDirUp

2 parents e0e0285 + aa52b36
14 files changed: 171 additions & 282 deletions

.openpublishing.redirection.json

Lines changed: 1 addition & 1 deletion

@@ -4042,7 +4042,7 @@
     },
     {
       "source_path": "articles/azure-resource-manager/templates/template-tutorial-create-encrypted-storage-accounts.md",
-      "redirect_url": "articles/azure-resource-manager/templates/template-tutorial-use-template-reference",
+      "redirect_url": "/azure/azure-resource-manager/templates/template-tutorial-use-template-reference",
       "redirect_document_id": false
     },
     {

articles/azure-monitor/platform/alerts-metric-create-templates.md

Lines changed: 4 additions & 0 deletions

@@ -1513,6 +1513,10 @@ This section will describe Azure Resource Manager templates for three scenarios
 - Monitoring all virtual machines (in one Azure region) in a subscription.
 - Monitoring a list of virtual machines (in one Azure region) in a subscription.
 
+> [!NOTE]
+>
+> In a metric alert rule that monitors multiple resources, only one condition is allowed.
+
 ### Static threshold alert on all virtual machines in one or more resource groups
 
 This template will create a static threshold metric alert rule that monitors Percentage CPU for all virtual machines (in one Azure region) in one or more resource groups.
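As a sketch of what the single-condition constraint in the added note looks like in practice, a multi-resource metric alert carries exactly one entry in its `allOf` criteria array. The fragment below is a minimal, hypothetical example based on the `Microsoft.Insights/metricAlerts` 2018-03-01 schema; the rule name, region, and threshold are illustrative and not taken from the templates in this commit.

```json
{
  "type": "Microsoft.Insights/metricAlerts",
  "apiVersion": "2018-03-01",
  "name": "cpu-alert-all-vms",
  "location": "global",
  "properties": {
    "severity": 3,
    "enabled": true,
    "scopes": ["[resourceGroup().id]"],
    "targetResourceType": "Microsoft.Compute/virtualMachines",
    "targetResourceRegion": "westus",
    "evaluationFrequency": "PT1M",
    "windowSize": "PT5M",
    "criteria": {
      "odata.type": "Microsoft.Azure.Monitor.MultipleResourceMultipleMetricCriteria",
      "allOf": [
        {
          "criterionType": "StaticThresholdCriterion",
          "name": "HighCpu",
          "metricName": "Percentage CPU",
          "metricNamespace": "Microsoft.Compute/virtualMachines",
          "operator": "GreaterThan",
          "threshold": 80,
          "timeAggregation": "Average"
        }
      ]
    }
  }
}
```

Adding a second object to `allOf` here would make the deployment fail for a multi-resource scope, which is the behavior the note warns about.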

articles/azure-monitor/platform/alerts-metric-overview.md

Lines changed: 4 additions & 0 deletions

@@ -137,6 +137,10 @@ You can specify the scope of monitoring by a single metric alert rule in one of
 
 Creating metric alert rules that monitor multiple resources is like [creating any other metric alert](alerts-metric.md) that monitors a single resource. Only difference is that you would select all the resources you want to monitor. You can also create these rules through [Azure Resource Manager templates](../../azure-monitor/platform/alerts-metric-create-templates.md#template-for-a-metric-alert-that-monitors-multiple-resources). You will receive individual notifications for each monitored resource.
 
+> [!NOTE]
+>
+> In a metric alert rule that monitors multiple resources, only one condition is allowed.
+
 ## Typical latency
 
 For metric alerts, typically you will get notified in under 5 minutes if you set the alert rule frequency to be 1 min. In cases of heavy load for notification systems, you might see a longer latency.

articles/azure-resource-manager/templates/toc.yml

Lines changed: 1 addition & 1 deletion

@@ -68,7 +68,7 @@
 - name: Advanced
   items:
   - name: Utilize template reference
-    href: template-tutorial-create-encrypted-storage-accounts.md
+    href: template-tutorial-use-template-reference.md
   - name: Create multiple instances
     displayName: iteration,copy
    href: template-tutorial-create-multiple-instances.md

articles/cognitive-services/LUIS/luis-tutorial-bing-spellcheck.md

Lines changed: 1 addition & 1 deletion

@@ -66,7 +66,7 @@ The endpoint query needs the key passed in the query string parameters for each
 
 The endpoint URL has several values that need to be passed correctly. The Bing Spell Check API v7 key is just another one of these. You must set the **spellCheck** parameter to true and you must set the value of **bing-spell-check-subscription-key** to the key value:
 
-`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appID}?subscription-key={luisKey}&spellCheck=**true**&bing-spell-check-subscription-key=**{bingKey}**&verbose=true&timezoneOffset=0&q={utterance}`
+`https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appID}?subscription-key={luisKey}&spellCheck=true&bing-spell-check-subscription-key={bingKey}&verbose=true&timezoneOffset=0&q={utterance}`
 
 ## Send misspelled utterance to LUIS

 1. In a web browser, copy the preceding string and replace the `region`, `appId`, `luisKey`, and `bingKey` with your own values. Make sure to use the endpoint region, if it is different from your publishing [region](luis-reference-regions.md).
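The substitution step described in this file's step 1 can be sketched programmatically. The snippet below is only an illustration of assembling the corrected URL; the region, app ID, keys, and utterance are placeholders, not working values, and sending the request would require a real subscription.

```python
# Assemble the LUIS v2.0 endpoint URL with Bing Spell Check enabled.
# Every value below is a placeholder -- substitute your own before sending.
from urllib.parse import quote

region = "westus"
app_id = "your-app-id"
luis_key = "your-luis-key"
bing_key = "your-bing-spell-check-key"
utterance = "mountainn"  # deliberately misspelled to exercise spell check

url = (
    f"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{app_id}"
    f"?subscription-key={luis_key}"
    f"&spellCheck=true"
    f"&bing-spell-check-subscription-key={bing_key}"
    f"&verbose=true&timezoneOffset=0"
    f"&q={quote(utterance)}"  # URL-encode the utterance
)
print(url)
# A real request would then fetch this URL (e.g. with an HTTP client);
# that needs network access and valid keys, so it is omitted here.
```

Note that `spellCheck=true` and the Bing key are plain query-string values, matching the corrected URL above (the `**` emphasis markers removed in this commit were never part of the literal URL).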

articles/event-hubs/event-hubs-availability-and-consistency.md

Lines changed: 54 additions & 3 deletions

@@ -12,7 +12,7 @@ ms.devlang: na
 ms.topic: article
 ms.tgt_pltfrm: na
 ms.workload: na
-ms.date: 01/29/2020
+ms.date: 03/27/2020
 ms.author: shvija
 
 ---
@@ -33,12 +33,63 @@ Brewer's theorem defines consistency and availability as follows:
 Event Hubs is built on top of a partitioned data model. You can configure the number of partitions in your event hub during setup, but you cannot change this value later. Since you must use partitions with Event Hubs, you have to make a decision about availability and consistency for your application.
 
 ## Availability
-The simplest way to get started with Event Hubs is to use the default behavior. If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync?view=azure-dotnet#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+The simplest way to get started with Event Hubs is to use the default behavior.
+
+#### [Azure.Messaging.EventHubs (5.0.0 or later)](#tab/latest)
+If you create a new **[EventHubProducerClient](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient?view=azure-dotnet)** object and use the **[SendAsync](/dotnet/api/azure.messaging.eventhubs.producer.eventhubproducerclient.sendasync?view=azure-dotnet)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+
+#### [Microsoft.Azure.EventHubs (4.1.0 or earlier)](#tab/old)
+If you create a new **[EventHubClient](/dotnet/api/microsoft.azure.eventhubs.eventhubclient)** object and use the **[Send](/dotnet/api/microsoft.azure.eventhubs.eventhubclient.sendasync?view=azure-dotnet#Microsoft_Azure_EventHubs_EventHubClient_SendAsync_Microsoft_Azure_EventHubs_EventData_)** method, your events are automatically distributed between partitions in your event hub. This behavior allows for the greatest amount of up time.
+
+---
 
 For use cases that require the maximum up time, this model is preferred.
 
 ## Consistency
-In some scenarios, the ordering of events can be important. For example, you may want your back-end system to process an update command before a delete command. In this instance, you can either set the partition key on an event, or use a `PartitionSender` object to only send events to a certain partition. Doing so ensures that when these events are read from the partition, they are read in order.
+In some scenarios, the ordering of events can be important. For example, you may want your back-end system to process an update command before a delete command. In this instance, you can either set the partition key on an event, or use a `PartitionSender` object (if you are using the old **Microsoft.Azure.EventHubs** library) to only send events to a certain partition. Doing so ensures that when these events are read from the partition, they are read in order. If you are using the newer **Azure.Messaging.EventHubs** library, see [Migrating code from PartitionSender to EventHubProducerClient for publishing events to a partition](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md#migrating-code-from-partitionsender-to-eventhubproducerclient-for-publishing-events-to-a-partition).
+
+#### [Azure.Messaging.EventHubs (5.0.0 or later)](#tab/latest)
+
+```csharp
+var connectionString = "<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>";
+var eventHubName = "<< NAME OF THE EVENT HUB >>";
+
+await using (var producerClient = new EventHubProducerClient(connectionString, eventHubName))
+{
+    var batchOptions = new CreateBatchOptions() { PartitionId = "my-partition-id" };
+    using EventDataBatch eventBatch = await producerClient.CreateBatchAsync(batchOptions);
+    eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("First")));
+    eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Second")));
+
+    await producerClient.SendAsync(eventBatch);
+}
+```
+
+#### [Microsoft.Azure.EventHubs (4.1.0 or earlier)](#tab/old)
+
+```csharp
+var connectionString = "<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>";
+var eventHubName = "<< NAME OF THE EVENT HUB >>";
+
+var connectionStringBuilder = new EventHubsConnectionStringBuilder(connectionString) { EntityPath = eventHubName };
+var eventHubClient = EventHubClient.CreateFromConnectionString(connectionStringBuilder.ToString());
+PartitionSender partitionSender = eventHubClient.CreatePartitionSender("my-partition-id");
+try
+{
+    EventDataBatch eventBatch = partitionSender.CreateBatch();
+    eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("First")));
+    eventBatch.TryAdd(new EventData(Encoding.UTF8.GetBytes("Second")));
+
+    await partitionSender.SendAsync(eventBatch);
+}
+finally
+{
+    await partitionSender.CloseAsync();
+    await eventHubClient.CloseAsync();
+}
+```
+
+---
 
 With this configuration, keep in mind that if the particular partition to which you are sending is unavailable, you will receive an error response. As a point of comparison, if you do not have an affinity to a single partition, the Event Hubs service sends your event to the next available partition.
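The ordering guarantee discussed in the Consistency section above rests on a simple invariant: a given partition key always maps to the same partition. The Python sketch below illustrates only that invariant; it is not the actual Event Hubs hashing algorithm, and the event and key names are hypothetical.

```python
# Hypothetical sketch: events that share a partition key always land in the
# same partition, so they are read back in the order they were sent.
# This is NOT the real Event Hubs hash; it only demonstrates the idea.
import zlib
from collections import defaultdict

NUM_PARTITIONS = 4

def partition_for(key: str) -> int:
    # Stable hash: a given key always maps to the same partition.
    return zlib.crc32(key.encode("utf-8")) % NUM_PARTITIONS

partitions = defaultdict(list)
for event in ["update:device-1", "delete:device-1", "update:device-2"]:
    key = event.split(":")[1]  # use the device ID as the partition key
    partitions[partition_for(key)].append(event)

# Both device-1 events share one partition and keep their send order,
# so "update" is guaranteed to be read before "delete".
device1_events = [e for e in partitions[partition_for("device-1")]
                  if e.endswith("device-1")]
assert device1_events == ["update:device-1", "delete:device-1"]
```

Sending without a key (the Availability model) would instead let the service pick any available partition per event, trading this ordering guarantee for uptime.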

articles/event-hubs/event-hubs-dotnet-standard-getstarted-send.md

Lines changed: 1 addition & 1 deletion

@@ -21,7 +21,7 @@ ms.author: spelluru
 This quickstart shows how to send events to and receive events from an event hub using the **Microsoft.Azure.EventHubs** .NET Core library.
 
 > [!WARNING]
-> This quickstart uses the old **Microsoft.Azure.EventHubs** package. For a quickstart that uses the latest **Azure.Messaging.EventHubs** library, see [Send and receive events using Azure.Messaging.EventHubs library](get-started-dotnet-standard-send-v2.md). To move your application from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/migration-guide-from-v4.md).
+> This quickstart uses the old **Microsoft.Azure.EventHubs** package. For a quickstart that uses the latest **Azure.Messaging.EventHubs** library, see [Send and receive events using Azure.Messaging.EventHubs library](get-started-dotnet-standard-send-v2.md). To move your application from using the old library to new one, see the [Guide to migrate from Microsoft.Azure.EventHubs to Azure.Messaging.EventHubs](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventhub/Azure.Messaging.EventHubs/MigrationGuide.md).
 
 ## Prerequisites
 If you are new to Azure Event Hubs, see [Event Hubs overview](event-hubs-about.md) before you do this quickstart.

articles/expressroute/expressroute-config-samples-routing.md

Lines changed: 1 addition & 204 deletions

Large diffs are not rendered by default.

articles/hdinsight/spark/apache-spark-intellij-tool-plugin.md

Lines changed: 10 additions & 10 deletions

@@ -1,26 +1,26 @@
 ---
-title: 'Tutorial - Azure Toolkit for IntelliJ: Spark app - HDInsight'
-description: Tutorial - Use the Azure Toolkit for IntelliJ to develop Spark applications written in Scala, and submit them to an HDInsight Spark cluster.
+title: 'Azure Toolkit for IntelliJ: Spark app - HDInsight'
+description: Use the Azure Toolkit for IntelliJ to develop Spark applications written in Scala, and submit them to an HDInsight Spark cluster.
 author: hrasheed-msft
 ms.author: hrasheed
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
-ms.topic: tutorial
+ms.topic: conceptual
 ms.date: 09/04/2019
 ---
 
-# Tutorial: Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster
+# Use Azure Toolkit for IntelliJ to create Apache Spark applications for HDInsight cluster
 
-This tutorial demonstrates how to develop Apache Spark applications on Azure HDInsight using the **Azure Toolkit** plug-in for the IntelliJ IDE. [Azure HDInsight](../hdinsight-overview.md) is a managed, open-source analytics service in the cloud that allows you to use open-source frameworks like Hadoop, Apache Spark, Apache Hive, and Apache Kafka.
+This article demonstrates how to develop Apache Spark applications on Azure HDInsight using the **Azure Toolkit** plug-in for the IntelliJ IDE. [Azure HDInsight](../hdinsight-overview.md) is a managed, open-source analytics service in the cloud that allows you to use open-source frameworks like Hadoop, Apache Spark, Apache Hive, and Apache Kafka.
 
 You can use the **Azure Toolkit** plug-in in a few ways:
 
 * Develop and submit a Scala Spark application to an HDInsight Spark cluster.
 * Access your Azure HDInsight Spark cluster resources.
 * Develop and run a Scala Spark application locally.
 
-In this tutorial, you learn how to:
+In this article, you learn how to:
 > [!div class="checklist"]
 > * Use the Azure Toolkit for IntelliJ plug-in
 > * Develop Apache Spark applications
@@ -30,7 +30,7 @@ In this tutorial, you learn how to:
 
 * An Apache Spark cluster on HDInsight. For instructions, see [Create Apache Spark clusters in Azure HDInsight](apache-spark-jupyter-spark-sql.md).
 
-* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This tutorial uses Java version 8.0.202.
+* [Oracle Java Development kit](https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html). This article uses Java version 8.0.202.
 
 * IntelliJ IDEA. This article uses [IntelliJ IDEA Community ver. 2018.3.4](https://www.jetbrains.com/idea/download/).
 
@@ -72,7 +72,7 @@ Perform the following steps to install the Scala plugin:
 
 | Property | Description |
 | ----- | ----- |
-|Project name| Enter a name. This tutorial uses `myApp`.|
+|Project name| Enter a name. This article uses `myApp`.|
 |Project&nbsp;location| Enter the desired location to save your project.|
 |Project SDK| This might be blank on your first use of IDEA. Select **New...** and navigate to your JDK.|
 |Spark Version|The creation wizard integrates the proper version for Spark SDK and Scala SDK. If the Spark cluster version is earlier than 2.0, select **Spark 1.x**. Otherwise, select **Spark2.x**. This example uses **Spark 2.3.0 (Scala 2.11.8)**.|
@@ -467,15 +467,15 @@ If you're not going to continue to use this application, delete the cluster that
 
 1. Select **HDInsight clusters** under **Services**.
 
-1. In the list of HDInsight clusters that appears, select the **...** next to the cluster that you created for this tutorial.
+1. In the list of HDInsight clusters that appears, select the **...** next to the cluster that you created for this article.
 
 1. Select **Delete**. Select **Yes**.
 
 ![Azure portal delete HDInsight cluster](./media/apache-spark-intellij-tool-plugin/hdinsight-azure-portal-delete-cluster.png "Delete HDInsight cluster")
 
 ## Next steps
 
-In this tutorial, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
+In this article, you learned how to use the Azure Toolkit for IntelliJ plug-in to develop Apache Spark applications written in [Scala](https://www.scala-lang.org/), and then submitted them to an HDInsight Spark cluster directly from the IntelliJ integrated development environment (IDE). Advance to the next article to see how the data you registered in Apache Spark can be pulled into a BI analytics tool such as Power BI.
 
 > [!div class="nextstepaction"]
 > [Analyze Apache Spark data using Power BI](apache-spark-use-bi-tools.md)
