Replace `clustername` with the name of your HDInsight cluster. Replace `<username>` with the cluster login account for your cluster. For ESP clusters, use the full UPN (e.g. [email protected]). Replace `password` with the password for the cluster login account.
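As a sketch, with those substitutions made, the Beeline invocation against the public endpoint would look like the following (hypothetical cluster name and credentials; the JDBC URL follows the standard HDInsight gateway pattern):

```shell
# Hypothetical example values; substitute your own cluster name and credentials.
beeline -u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2' \
    -n admin -p 'password'
```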
Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more info. You can use the `curl` command with `-v` option to troubleshoot any connectivity problems with public or private endpoints before using beeline.
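For example, a quick reachability check against the gateway might look like this (hypothetical cluster name; the verbose output shows the DNS resolution, TLS handshake, and HTTP status, which helps isolate where a connection fails):

```shell
# Verbose connectivity check against the cluster gateway (hypothetical name).
curl -v https://clustername.azurehdinsight.net/
```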
---
Apache Spark provides its own implementation of HiveServer2, which is sometimes referred to as the Spark Thrift server.
The connection string used is slightly different. Instead of containing `httpPath=/hive2`, it contains `httpPath=/sparkhive2`:
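A sketch of the resulting invocation, assuming the standard HDInsight public endpoint (hypothetical cluster name and credentials):

```shell
# Note httpPath=/sparkhive2 instead of httpPath=/hive2 (hypothetical values).
beeline -u 'jdbc:hive2://clustername.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/sparkhive2' \
    -n admin -p 'password'
```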
Replace `clustername` with the name of your HDInsight cluster. Replace `<username>` with the cluster login account for your cluster. For ESP clusters, use the full UPN (e.g. [email protected]). Replace `password` with the password for the cluster login account.
Private endpoints point to a basic load balancer, which can only be accessed from the VNETs peered in the same region. See [constraints on global VNet peering and load balancers](../../virtual-network/virtual-networks-faq.md#what-are-the-constraints-related-to-global-vnet-peering-and-load-balancers) for more info. You can use the `curl` command with `-v` option to troubleshoot any connectivity problems with public or private endpoints before using beeline.
---
This example is based on using the Beeline client from an SSH connection.
```hql
SELECT t4 AS sev, COUNT(*) AS count FROM log4jLogs
    WHERE t4 = '[ERROR]' AND INPUT__FILE__NAME LIKE '%.log'
    GROUP BY t4;
```
These statements perform the following actions:
* `DROP TABLE` - If the table exists, it's deleted.
This is a continuation from the prior example. Use the following steps to create a file, then run it using Beeline.
```hql
CREATE TABLE IF NOT EXISTS errorLogs (t1 string, t2 string, t3 string, t4 string, t5 string, t6 string, t7 string) STORED AS ORC;
INSERT OVERWRITE TABLE errorLogs SELECT t1, t2, t3, t4, t5, t6, t7 FROM log4jLogs WHERE t4 = '[ERROR]' AND INPUT__FILE__NAME LIKE '%.log';
```
These statements perform the following actions:
* **CREATE TABLE IF NOT EXISTS** - If the table doesn't already exist, it's created. Since the **EXTERNAL** keyword isn't used, this statement creates an internal table. Internal tables are stored in the Hive data warehouse and are managed completely by Hive.
* **STORED AS ORC** - Stores the data in Optimized Row Columnar (ORC) format. ORC format is a highly optimized and efficient format for storing Hive data.
> [!NOTE]
> Unlike external tables, dropping an internal table deletes the underlying data as well.
3. To save the file, use **Ctrl**+**X**, then enter **Y**, and finally **Enter**.
4. Use the following to run the file using Beeline:
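    A sketch of that invocation from an SSH session on the cluster headnode, assuming the statements above were saved to a hypothetical file named `query.hql` (the `-i` option runs the file's statements and then leaves the Beeline session open):

    ```shell
    # Run the saved HiveQL file with Beeline; query.hql is a hypothetical filename.
    beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i query.hql
    ```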
* For more general information on Hive in HDInsight, see [Use Apache Hive with Apache Hadoop on HDInsight](hdinsight-use-hive.md)
* For more information on other ways you can work with Hadoop on HDInsight, see [Use MapReduce with Apache Hadoop on HDInsight](hdinsight-use-mapreduce.md)