---
title: Connect to or install Apache Beeline - Azure HDInsight
description: Learn how to connect to the Apache Beeline client to run Hive queries with Hadoop on HDInsight. Beeline is a utility for working with HiveServer2 over JDBC.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: seoapr2020
ms.date: 05/27/2020
---

# Connect to Apache Beeline on HDInsight or install it locally
[Apache Beeline](https://cwiki.apache.org/confluence/display/Hive/HiveServer2+Clients#HiveServer2Clients-Beeline–NewCommandLineShell) is a Hive client that is included on the head nodes of your HDInsight cluster. Beeline uses JDBC to connect to HiveServer2, a service hosted on your HDInsight cluster. This article describes how to connect to the Beeline client installed on your HDInsight cluster across different types of connections. It also discusses how to [install the Beeline client locally](#install-beeline-client).

## Types of connections
### From an SSH session
When connecting from an SSH session to a cluster head node, connect to the `headnodehost` address on port `10001`:
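
A typical Beeline invocation for this connection (a sketch, assuming the HTTP transport mode that HDInsight uses by default):

```bash
beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http'
```
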
### Over an Azure Virtual Network

When connecting from a client to HDInsight over an Azure Virtual Network, you must provide the fully qualified domain name (FQDN) of a cluster head node. Since this connection is made directly to the cluster nodes, the connection uses port `10001`:
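
A sketch of the corresponding connection string, under the same HTTP-transport assumption:

```bash
beeline -u 'jdbc:hive2://<headnode-FQDN>:10001/;transportMode=http'
```
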
Replace `<headnode-FQDN>` with the fully qualified domain name of a cluster headnode. To find the fully qualified domain name of a headnode, use the information in the [Manage HDInsight using the Apache Ambari REST API](../hdinsight-hadoop-manage-ambari-rest-api.md#get-the-fqdn-of-cluster-nodes) document.
### To HDInsight Enterprise Security Package (ESP) cluster using Kerberos
When connecting from a client to an Enterprise Security Package (ESP) cluster joined to Azure Active Directory Domain Services (AAD-DS), from a machine in the same realm as the cluster, you must also specify the domain name `<AAD-Domain>` and the name of a domain user account with permissions to access the cluster, `<username>`:
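
A minimal sketch of the Kerberos flow (obtain a ticket, then connect; the `hive/_HOST` service principal is an assumption based on a standard HiveServer2 configuration):

```bash
kinit <username>
beeline -u 'jdbc:hive2://<headnode-FQDN>:10001/default;principal=hive/_HOST@<AAD-Domain>;transportMode=http' -n <username>
```
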
Replace `<username>` with the name of an account on the domain with permissions to access the cluster. Replace `<AAD-DOMAIN>` with the name of the Azure Active Directory (AAD) that the cluster is joined to. Use an uppercase string for the `<AAD-DOMAIN>` value, otherwise the credential won't be found. Check `/etc/krb5.conf` for the realm names if needed.
To find the JDBC URL from Ambari:
1. From a web browser, navigate to `https://CLUSTERNAME.azurehdinsight.net/#/main/services/HIVE/summary`, where `CLUSTERNAME` is the name of your cluster. Ensure that HiveServer2 is running.
1. Copy the HiveServer2 JDBC URL to the clipboard.
### Over public or private endpoints
When connecting to a cluster using the public or private endpoints, you must provide the cluster login account name (default `admin`) and password. For example, you might use Beeline from a client system to connect to the `clustername.azurehdinsight.net` address. This connection is made over port `443` and is encrypted using TLS/SSL.
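
A sketch of such a connection, assuming the `httpPath=/hive2` value referenced in the Spark section later in this article:

```bash
beeline -u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2' -n admin -p password
```
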
Replace `clustername` with the name of your HDInsight cluster. Replace `admin` with the cluster login account for your cluster. For ESP clusters, use the full UPN (for example, `user@domain.com`). Replace `password` with the password for the cluster login account.

Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more information. You can use the `curl` command with the `-v` option to troubleshoot any connectivity problems with public or private endpoints before using Beeline.
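
For example (the `-int` suffix on the private endpoint hostname is the HDInsight convention; adjust the hostnames to your cluster):

```bash
# Public endpoint
curl -v https://clustername.azurehdinsight.net/
# Private endpoint (reachable only from peered VNETs in the same region)
curl -v https://clustername-int.azurehdinsight.net/
```
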
### Use Beeline with Apache Spark
Apache Spark provides its own implementation of HiveServer2, which is sometimes referred to as the Spark Thrift server. This service uses Spark SQL to resolve queries instead of Hive, and may provide better performance depending on your query.
#### Through public or private endpoints
The connection string used is slightly different: instead of containing `httpPath=/hive2`, it uses `httpPath=/sparkhive2`. Replace `clustername` with the name of your HDInsight cluster. Replace `admin` with the cluster login account for your cluster. For ESP clusters, use the full UPN (for example, `user@domain.com`). Replace `password` with the password for the cluster login account.
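
A sketch of the Spark Thrift server connection, differing from the Hive example above only in the HTTP path:

```bash
beeline -u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/sparkhive2' -n admin -p password
```
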
Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more information. You can use the `curl` command with the `-v` option to troubleshoot any connectivity problems with public or private endpoints before using Beeline.
#### From cluster head or inside Azure Virtual Network with Apache Spark
When connecting directly from the cluster head node, or from a resource inside the same Azure Virtual Network as the HDInsight cluster, port `10002` should be used for Spark Thrift server instead of `10001`. The following example shows how to connect directly to the head node:
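
A sketch of a direct head-node connection to the Spark Thrift server, again assuming HTTP transport:

```bash
beeline -u 'jdbc:hive2://headnodehost:10002/;transportMode=http'
```
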
## Install beeline client

Although Beeline is included on the head nodes, you may want to install it locally. The following install steps are for a local machine and are based on the [Windows Subsystem for Linux](https://docs.microsoft.com/windows/wsl/install-win10).
1. Update package lists. Enter the following command in your bash shell:

    ```bash
    sudo apt-get update
    ```
1. Install Java if not installed. You can check with the `which java` command.
1. If no java package is installed, enter the following command:

    ```bash
    sudo apt install openjdk-11-jre-headless
    ```

1. Open the bashrc file (often found in `~/.bashrc`): `nano ~/.bashrc`.

1. Amend the bashrc file. Add a line at the end of the file that sets `JAVA_HOME`; a minimal sketch, assuming the OpenJDK 11 package installed above, follows:
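
    ```bash
    # Assumed install path for openjdk-11-jre-headless on Ubuntu/WSL
    export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
    ```

1. Download the Hadoop and Hive archives that are unpacked in the next step. The URLs below are assumptions based on the archive names used there:

    ```bash
    # Assumed Apache archive locations for these versions
    wget https://archive.apache.org/dist/hadoop/core/hadoop-2.7.3/hadoop-2.7.3.tar.gz
    wget https://archive.apache.org/dist/hive/hive-1.2.1/apache-hive-1.2.1-bin.tar.gz
    ```
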
1. Unpack the archives by entering the following commands:

    ```bash
    tar -xvzf hadoop-2.7.3.tar.gz
    tar -xvzf apache-hive-1.2.1-bin.tar.gz
    ```
1. Further amend the bashrc file. You'll need to identify the path to where the archives were unpacked. If using the [Windows Subsystem for Linux](https://docs.microsoft.com/windows/wsl/install-win10), and you followed the steps exactly, your path would be `/mnt/c/Users/user/`, where `user` is your user name.
1. Open the file: `nano ~/.bashrc`

1. Modify the commands below with the appropriate path, and then enter them at the end of the bashrc file. The values shown are a sketch, assuming the WSL path from the earlier step:
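
    ```bash
    # Paths are assumptions; point HADOOP_HOME and HIVE_HOME at the
    # directories where you unpacked the archives.
    export HADOOP_HOME=/mnt/c/Users/user/hadoop-2.7.3
    export HIVE_HOME=/mnt/c/Users/user/apache-hive-1.2.1-bin
    PATH=$PATH:$HIVE_HOME/bin
    ```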