articles/hdinsight/domain-joined/apache-domain-joined-run-hive.md (+2 −1)

@@ -4,11 +4,12 @@ description: Learn how to configure Apache Ranger policies for Hive in an Azure
 ms.service: hdinsight
 author: omidm1
 ms.author: omidm
-ms.reviewer: mamccrea
+ms.reviewer: jasonh
 ms.custom: hdinsightactive
 ms.topic: conceptual
 ms.date: 09/24/2018
 ---
+
 # Configure Apache Hive policies in HDInsight with Enterprise Security Package

 Learn how to configure Apache Ranger policies for Apache Hive. In this article, you create two Ranger policies to restrict access to the hivesampletable. The hivesampletable comes with HDInsight clusters. After you have configured the policies, you use Excel and ODBC driver to connect to Hive tables in HDInsight.
articles/hdinsight/domain-joined/hdinsight-use-oozie-domain-joined-clusters.md (+2 −1)

@@ -4,11 +4,12 @@ description: Secure Apache Oozie workflows using the Azure HDInsight Enterprise
 ms.service: hdinsight
 author: omidm1
 ms.author: omidm
-ms.reviewer: mamccrea
+ms.reviewer: jasonh
 ms.custom: hdinsightactive,seodec18
 ms.topic: conceptual
 ms.date: 02/15/2019
 ---
+
 # Run Apache Oozie in HDInsight Hadoop clusters with Enterprise Security Package

 Apache Oozie is a workflow and coordination system that manages Apache Hadoop jobs. Oozie is integrated with the Hadoop stack, and it supports the following jobs:
articles/hdinsight/hdinsight-hadoop-create-linux-clusters-dotnet-sdk.md (+3 −2)

@@ -1,14 +1,15 @@
 ---
 title: Create Apache Hadoop clusters using .NET - Azure HDInsight
 description: Learn how to create Apache Hadoop, Apache HBase, Apache Storm, or Apache Spark clusters on Linux for HDInsight using the HDInsight .NET SDK.
-author: mamccrea
+author: hrasheed-msft
 ms.reviewer: jasonh
 ms.service: hdinsight
 ms.custom: hdinsightactive
 ms.topic: conceptual
 ms.date: 08/16/2018
-ms.author: mamccrea
+ms.author: hrasheed
 ---
+
 # Create Linux-based clusters in HDInsight using the .NET SDK
articles/hdinsight/storm/apache-storm-develop-csharp-visual-studio-topology.md (+4 −3)

@@ -9,6 +9,7 @@ ms.topic: conceptual
 ms.date: 11/27/2017
 ROBOTS: NOINDEX
 ---
+
 # Develop C# topologies for Apache Storm by using the Data Lake tools for Visual Studio

 Learn how to create a C# Apache Storm topology by using the Azure Data Lake (Apache Hadoop) tools for Visual Studio. This document walks through the process of creating a Storm project in Visual Studio, testing it locally, and deploying it to an Apache Storm on Azure HDInsight cluster.

@@ -149,7 +150,7 @@ For an example topology that uses this component and works with Storm on HDInsig
 * **NextTuple**: Called by Storm when the spout is allowed to emit new tuples.

-* **Ack** (transactional topology only): Handles acknowledgements initiated by other components in the topology for tuples sent from the spout. Acknowledging a tuple lets the spout know that it was processed successfully by downstream components.
+* **Ack** (transactional topology only): Handles acknowledgments initiated by other components in the topology for tuples sent from the spout. Acknowledging a tuple lets the spout know that it was processed successfully by downstream components.

 * **Fail** (transactional topology only): Handles tuples that are fail-processing other components in the topology. Implementing a Fail method allows you to re-emit the tuple so that it can be processed again.

@@ -560,7 +561,7 @@ Although it is easy to deploy a topology to a cluster, in some cases, you may ne
 1. In **Solution Explorer**, right-click the project, and select **Properties**. In the project properties, change the **Output type** to **Console Application**.

-   (screenshot; image link not captured in this extract)
+   (screenshot; image link not captured in this extract)

 > [!NOTE]
 > Remember to change the **Output type** back to **Class Library** before you deploy the topology to a cluster.

@@ -706,7 +707,7 @@ To view errors that have occurred in a running topology, use the following steps
 2. For the **Spout** and **Bolts**, the **Last Error** column contains information on the last error.

-3. Select the **Spout Id** or **Bolt Id** for the component that has an error listed. On the details page that is displayed, additional error information is listed in the **Errors** section at the bottom of the page.
+3. Select the **Spout ID** or **Bolt ID** for the component that has an error listed. On the details page that is displayed, additional error information is listed in the **Errors** section at the bottom of the page.

 4. To obtain more information, select a **Port** from the **Executors** section of the page, to see the Storm worker log for the last few minutes.
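The NextTuple/Ack/Fail contract described in the diff context above is the spout interface used by C# Storm topologies. A minimal sketch of how those three methods typically fit together in an SCP.NET spout follows; the `Microsoft.SCP` types, the `SentenceSpout` class, the placeholder data source, and the exact `Emit` overload are illustrative assumptions, not code from this commit, and signatures may differ between SCP.NET versions:

```csharp
using System.Collections.Generic;
using Microsoft.SCP;   // SCP.NET NuGet package (assumed)

public class SentenceSpout : ISCPSpout
{
    private readonly Context ctx;
    private long seq = 0;
    // Tuples awaiting acknowledgment, keyed by sequence ID (enables replay on Fail).
    private readonly Dictionary<long, string> pending = new Dictionary<long, string>();

    public SentenceSpout(Context context)
    {
        this.ctx = context;
    }

    // Called by Storm when the spout is allowed to emit new tuples.
    public void NextTuple(Dictionary<string, object> parms)
    {
        string sentence = "the quick brown fox";   // placeholder data source (assumption)
        long id = seq++;
        pending[id] = sentence;
        ctx.Emit(Constants.DEFAULT_STREAM_ID, new Values(sentence), id);
    }

    // Transactional topology only: downstream components acknowledged the tuple,
    // so it no longer needs to be tracked for replay.
    public void Ack(long seqId, Dictionary<string, object> parms)
    {
        pending.Remove(seqId);
    }

    // Transactional topology only: processing failed downstream,
    // so re-emit the tuple so that it can be processed again.
    public void Fail(long seqId, Dictionary<string, object> parms)
    {
        string sentence;
        if (pending.TryGetValue(seqId, out sentence))
        {
            ctx.Emit(Constants.DEFAULT_STREAM_ID, new Values(sentence), seqId);
        }
    }
}
```

In a non-transactional topology, only `NextTuple` does real work; `Ack` and `Fail` can be left empty because Storm does not track tuple delivery for the spout.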
articles/hdinsight/storm/apache-storm-overview.md (+1 −1)

@@ -61,7 +61,7 @@ The Nimbus node provides functionality similar to the Apache Hadoop JobTracker,
 The default configuration for Apache Storm clusters is to have only one Nimbus node. Storm on HDInsight provides two Nimbus nodes. If the primary node fails, the Storm cluster switches to the secondary node while the primary node is recovered. The following diagram illustrates the task flow configuration for Storm on HDInsight:

-   (diagram; image link not captured in this extract)
+   (diagram; image link not captured in this extract)