
Commit aa88dd2

Merge pull request #251767 from v-akarnase/patch-22
Update apache-hive-warehouse-connector-zeppelin.md
2 parents: 6b1fc1e + 27cb346

1 file changed: +5, −5 lines


articles/hdinsight/interactive-query/apache-hive-warehouse-connector-zeppelin.md

Lines changed: 5 additions & 5 deletions
```diff
@@ -5,12 +5,12 @@ author: reachnijel
 ms.author: nijelsf
 ms.service: hdinsight
 ms.topic: how-to
-ms.date: 07/18/2022
+ms.date: 09/27/2023
 ---

 # Integrate Apache Zeppelin with Hive Warehouse Connector in Azure HDInsight

-HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we'll focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector.
+HDInsight Spark clusters include Apache Zeppelin notebooks with different interpreters. In this article, we focus only on the Livy interpreter to access Hive tables from Spark using Hive Warehouse Connector.

 > [!NOTE]
 > This article contains references to a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
```
````diff
@@ -21,7 +21,7 @@ Complete the [Hive Warehouse Connector setup](apache-hive-warehouse-connector.md

 ## Getting started

-1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Apache Spark cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command:
+1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Apache Spark cluster. Edit the following command by replacing CLUSTERNAME with the name of your cluster, and then enter the command:

 ```cmd
````
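The SSH step changed in this hunk follows the usual HDInsight connection pattern. As a minimal sketch, assuming the default `sshuser` account and the `-ssh.azurehdinsight.net` endpoint (both assumptions; the actual command body is not shown in this diff excerpt), the invocation could be built like this:

```shell
# Sketch only: assemble the ssh command for an HDInsight Spark cluster.
# CLUSTERNAME is a placeholder; sshuser and the endpoint suffix are assumed defaults.
CLUSTERNAME=mycluster
SSH_CMD="ssh sshuser@${CLUSTERNAME}-ssh.azurehdinsight.net"
echo "$SSH_CMD"
```

Run the echoed command interactively; key-based or password authentication depends on how the cluster was provisioned.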
```diff
@@ -78,7 +78,7 @@ Following configurations are required to access hive tables from Zeppelin with t
 | livy.spark.security.credentials.hiveserver2.enabled | true |
 | livy.spark.sql.hive.llap | true |
 | livy.spark.yarn.security.credentials.hiveserver2.enabled | true |
-| livy.superusers | livy,zeppelin |
+| livy.superusers | livy, zeppelin |
 | livy.spark.jars | `file:///usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly-VERSION.jar`.<br>Replace VERSION with value you obtained from [Getting started](#getting-started), earlier. |
 | livy.spark.submit.pyFiles | `file:///usr/hdp/current/hive_warehouse_connector/pyspark_hwc-VERSION.zip`.<br>Replace VERSION with value you obtained from [Getting started](#getting-started), earlier. |
 | livy.spark.sql.hive.hiveserver2.jdbc.url | Set it to the HiveServer2 Interactive JDBC URL of the Interactive Query cluster. |
```
```diff
@@ -90,7 +90,7 @@ Following configurations are required to access hive tables from Zeppelin with t
 |---|---|
 | livy.spark.sql.hive.hiveserver2.jdbc.url.principal | `hive/_HOST@<AAD-Domain>` |

-* Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Interactive Query cluster. Look for `default_realm` parameter in the `/etc/krb5.conf` file. Replace `<AAD-DOMAIN>` with this value as an uppercase string, otherwise the credential won't be found.
+* Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your Interactive Query cluster. Look for `default_realm` parameter in the `/etc/krb5.conf` file. Replace `<AAD-DOMAIN>` with this value as an uppercase string, otherwise the credential cannot be found.

 :::image type="content" source="./media/apache-hive-warehouse-connector/aad-domain.png" alt-text="hive warehouse connector AAD Domain" border="true":::
```
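The bullet changed in this hunk asks you to read `default_realm` from `/etc/krb5.conf` and substitute it as an uppercase string. A minimal sketch, assuming the usual `key = value` layout under `[libdefaults]` (the `KRB5_CONF` override exists only for illustration; on the cluster the default path applies):

```shell
# Sketch: extract default_realm from krb5.conf and upper-case it,
# producing the value to use for <AAD-DOMAIN>.
KRB5_CONF="${KRB5_CONF:-/etc/krb5.conf}"
awk -F'=' '/default_realm/ { gsub(/[ \t]/, "", $2); print toupper($2) }' "$KRB5_CONF"
```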
