
Commit dd12df4

Merge pull request #88115 from dagiro/cats50
cats50
2 parents 97229a4 + ad0e325


8 files changed, +13 -9 lines changed


articles/hdinsight/domain-joined/apache-domain-joined-manage.md

Lines changed: 1 addition & 1 deletion
@@ -4,7 +4,7 @@ description: Learn how to manage Azure HDInsight clusters with Enterprise Securi
 ms.service: hdinsight
 author: omidm1
 ms.author: omidm
-ms.reviewer: mamccrea
+ms.reviewer: jasonh
 ms.custom: hdinsightactive
 ms.topic: conceptual
 ms.date: 08/24/2018

articles/hdinsight/domain-joined/apache-domain-joined-run-hive.md

Lines changed: 2 additions & 1 deletion
@@ -4,11 +4,12 @@ description: Learn how to configure Apache Ranger policies for Hive in an Azure
 ms.service: hdinsight
 author: omidm1
 ms.author: omidm
-ms.reviewer: mamccrea
+ms.reviewer: jasonh
 ms.custom: hdinsightactive
 ms.topic: conceptual
 ms.date: 09/24/2018
 ---
+
 # Configure Apache Hive policies in HDInsight with Enterprise Security Package
 Learn how to configure Apache Ranger policies for Apache Hive. In this article, you create two Ranger policies to restrict access to the hivesampletable. The hivesampletable comes with HDInsight clusters. After you have configured the policies, you use Excel and ODBC driver to connect to Hive tables in HDInsight.
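The article touched by this hunk connects to hivesampletable from Excel through the Hive ODBC driver. Purely as an illustration of that same ODBC path (not part of this commit or the article), here is a minimal C# sketch using System.Data.Odbc; the DSN name `HiveSampleDsn` and the query are hypothetical and assume a DSN already configured against the cluster's Hive ODBC driver.

```csharp
using System;
using System.Data.Odbc;

class HiveSampleQuery
{
    static void Main()
    {
        // Hypothetical DSN: create it with the Hive ODBC driver and point it
        // at the HDInsight cluster before running this sketch.
        const string connectionString = "DSN=HiveSampleDsn";

        using (var connection = new OdbcConnection(connectionString))
        {
            connection.Open();

            // hivesampletable ships with HDInsight clusters; the Ranger policies
            // described in the article govern who may read which columns.
            var command = new OdbcCommand(
                "SELECT clientid, devicemake FROM hivesampletable LIMIT 10",
                connection);

            using (OdbcDataReader reader = command.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"{reader[0]}\t{reader[1]}");
                }
            }
        }
    }
}
```

Only the connection pattern matters here; the Ranger policies the article configures determine what such a query may read.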

articles/hdinsight/domain-joined/hdinsight-use-oozie-domain-joined-clusters.md

Lines changed: 2 additions & 1 deletion
@@ -4,11 +4,12 @@ description: Secure Apache Oozie workflows using the Azure HDInsight Enterprise
 ms.service: hdinsight
 author: omidm1
 ms.author: omidm
-ms.reviewer: mamccrea
+ms.reviewer: jasonh
 ms.custom: hdinsightactive,seodec18
 ms.topic: conceptual
 ms.date: 02/15/2019
 ---
+
 # Run Apache Oozie in HDInsight Hadoop clusters with Enterprise Security Package
 
 Apache Oozie is a workflow and coordination system that manages Apache Hadoop jobs. Oozie is integrated with the Hadoop stack, and it supports the following jobs:

articles/hdinsight/hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md

Lines changed: 3 additions & 2 deletions
@@ -1,14 +1,15 @@
 ---
 title: Create Apache Hadoop clusters using .NET - Azure HDInsight
 description: Learn how to create Apache Hadoop, Apache HBase, Apache Storm, or Apache Spark clusters on Linux for HDInsight using the HDInsight .NET SDK.
-author: mamccrea
+author: hrasheed-msft
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
 ms.topic: conceptual
 ms.date: 08/16/2018
-ms.author: mamccrea
+ms.author: hrasheed
 ---
+
 # Create Linux-based clusters in HDInsight using the .NET SDK
 
 [!INCLUDE [selector](../../includes/hdinsight-create-linux-cluster-selector.md)]

articles/hdinsight/storm/apache-storm-develop-csharp-visual-studio-topology.md

Lines changed: 4 additions & 3 deletions
@@ -9,6 +9,7 @@ ms.topic: conceptual
 ms.date: 11/27/2017
 ROBOTS: NOINDEX
 ---
+
 # Develop C# topologies for Apache Storm by using the Data Lake tools for Visual Studio
 
 Learn how to create a C# Apache Storm topology by using the Azure Data Lake (Apache Hadoop) tools for Visual Studio. This document walks through the process of creating a Storm project in Visual Studio, testing it locally, and deploying it to an Apache Storm on Azure HDInsight cluster.
@@ -149,7 +150,7 @@ For an example topology that uses this component and works with Storm on HDInsi
 
 * **NextTuple**: Called by Storm when the spout is allowed to emit new tuples.
 
-* **Ack** (transactional topology only): Handles acknowledgements initiated by other components in the topology for tuples sent from the spout. Acknowledging a tuple lets the spout know that it was processed successfully by downstream components.
+* **Ack** (transactional topology only): Handles acknowledgments initiated by other components in the topology for tuples sent from the spout. Acknowledging a tuple lets the spout know that it was processed successfully by downstream components.
 
 * **Fail** (transactional topology only): Handles tuples that are fail-processing other components in the topology. Implementing a Fail method allows you to re-emit the tuple so that it can be processed again.

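As a side note to the spout methods listed in the hunk above, the following self-contained C# sketch shows the bookkeeping a transactional spout typically does around NextTuple, Ack, and Fail. It is illustrative only: the IEmitter interface is a hypothetical stand-in declared inline, not the SCP.NET API that the article actually uses.

```csharp
using System;
using System.Collections.Generic;

// Hypothetical stand-in for whatever actually sends tuples downstream.
public interface IEmitter
{
    void Emit(long sequenceId, IList<object> values);
}

public class SentenceSpout
{
    private readonly IEmitter emitter;
    private readonly Queue<string> pending = new Queue<string>();
    // Emitted but not yet acknowledged tuples, keyed by sequence ID.
    private readonly Dictionary<long, string> inFlight = new Dictionary<long, string>();
    private long nextSequenceId;

    public SentenceSpout(IEmitter emitter)
    {
        this.emitter = emitter;
        pending.Enqueue("the cow jumped over the moon");
    }

    // NextTuple: called when the spout is allowed to emit a new tuple.
    public void NextTuple()
    {
        if (pending.Count == 0)
        {
            return;
        }

        string sentence = pending.Dequeue();
        long seqId = nextSequenceId++;
        inFlight[seqId] = sentence; // remember it until Ack or Fail arrives
        emitter.Emit(seqId, new List<object> { sentence });
    }

    // Ack: downstream components processed the tuple, so stop tracking it.
    public void Ack(long seqId)
    {
        inFlight.Remove(seqId);
    }

    // Fail: processing failed downstream, so queue the tuple for re-emission.
    public void Fail(long seqId)
    {
        if (inFlight.TryGetValue(seqId, out string sentence))
        {
            inFlight.Remove(seqId);
            pending.Enqueue(sentence);
        }
    }
}
```

In a real SCP.NET spout the emit call and the Ack/Fail signatures come from the Microsoft.SCP package; only the queue-and-retry pattern shown here carries over.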
@@ -560,7 +561,7 @@ Although it is easy to deploy a topology to a cluster, in some cases, you may ne
 
 1. In **Solution Explorer**, right-click the project, and select **Properties**. In the project properties, change the **Output type** to **Console Application**.
 
-![Screenshot of project properties, with Output type highlighted](./media/apache-storm-develop-csharp-visual-studio-topology/outputtype.png)
+![Screenshot of project properties, with Output type highlighted](./media/apache-storm-develop-csharp-visual-studio-topology/hdi-output-type-window.png)
 
 > [!NOTE]
 > Remember to change the **Output type** back to **Class Library** before you deploy the topology to a cluster.
@@ -706,7 +707,7 @@ To view errors that have occurred in a running topology, use the following steps
 
 2. For the **Spout** and **Bolts**, the **Last Error** column contains information on the last error.
 
-3. Select the **Spout Id** or **Bolt Id** for the component that has an error listed. On the details page that is displayed, additional error information is listed in the **Errors** section at the bottom of the page.
+3. Select the **Spout ID** or **Bolt ID** for the component that has an error listed. On the details page that is displayed, additional error information is listed in the **Errors** section at the bottom of the page.
 
 4. To obtain more information, select a **Port** from the **Executors** section of the page, to see the Storm worker log for the last few minutes.

articles/hdinsight/storm/apache-storm-overview.md

Lines changed: 1 addition & 1 deletion
@@ -61,7 +61,7 @@ The Nimbus node provides functionality similar to the Apache Hadoop JobTracker,
 
 The default configuration for Apache Storm clusters is to have only one Nimbus node. Storm on HDInsight provides two Nimbus nodes. If the primary node fails, the Storm cluster switches to the secondary node while the primary node is recovered. The following diagram illustrates the task flow configuration for Storm on HDInsight:
 
-![Diagram of nimbus, zookeeper, and supervisor](./media/apache-storm-overview/nimbus.png)
+![Diagram of nimbus, zookeeper, and supervisor](./media/apache-storm-overview/storm-diagram-nimbus.png)
 
 ## Ease of creation
