---
title: Use external Hive Metastore for Azure Synapse Spark Pool
description: Learn how to set up external Hive Metastore for Azure Synapse Spark Pool.
keywords: external Hive Metastore,share,Synapse
ms.service: synapse-analytics
ms.topic: conceptual
ms.subservice: spark
author: yanancai
ms.date: 09/08/2021
---

# Use external Hive Metastore for Synapse Spark Pool

Azure Synapse Analytics allows Apache Spark pools in the same workspace to share a managed HMS (Hive Metastore) compatible metastore as their catalog. When customers want to persist the Hive catalog metadata outside of the workspace, and share catalog objects with other computational engines outside of the workspace, such as HDInsight and Azure Databricks, they can connect to an external Hive Metastore. In this article, you learn how to connect Synapse Spark to an external Apache Hive Metastore.

## Supported Hive Metastore versions

The feature works with both Spark 2.4 and Spark 3.1. The following table shows the supported Hive Metastore versions for each Spark version.

## Set up linked service to Hive Metastore

> [!NOTE]
> Only Azure SQL Database and Azure Database for MySQL are supported as an external Hive Metastore. Currently, only User-Password authentication is supported. If the provided database is blank, provision it via the [Hive Schema Tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool) to create the database schema.

Follow the steps below to set up a linked service to the external Hive Metastore in the Synapse workspace.

1. Open Synapse Studio, go to **Manage > Linked services** at left, click **New** to create a new linked service.

2. Choose **Azure SQL Database** or **Azure Database for MySQL** based on your database type, and click **Continue**.

3. Provide **Name** of the linked service. Record the name of the linked service; this info will be used to configure Spark shortly.
4. You can either select **Azure SQL Database**/**Azure Database for MySQL** for the external Hive Metastore from the Azure subscription list, or enter the info manually.

5. Provide **User name** and **Password** to set up the connection.
6. **Test connection** to verify the username and password.

7. Click **Create** to create the linked service.

### Test connection and get the metastore version in notebook

Some network security rule settings may block access from the Spark pool to the external Hive Metastore DB. Before you configure the Spark pool, run the code below in any Spark pool notebook to test the connection to the external Hive Metastore DB.

You can also get your Hive Metastore version from the output results. The Hive Metastore version will be used in the Spark configuration.

#### Connection testing code for Azure SQL

```scala
import java.sql.DriverManager

// Placeholder JDBC URL: copy yours from Azure portal > SQL database > Connection strings > JDBC.
val url = "jdbc:sqlserver://<your_server>.database.windows.net:1433;database=<your_database>;user=<your_username>;password=<your_password>;encrypt=true;trustServerCertificate=false;loginTimeout=30;"
try {
    // Query the HMS VERSION table to confirm connectivity and read the schema version.
    val connection = DriverManager.getConnection(url)
    val result = connection.createStatement().executeQuery("select t.SCHEMA_VERSION from VERSION t")
    result.next();
    println(s"Successful to test connection. Hive Metastore version is ${result.getString(1)}")
} catch {
    case ex: Throwable => println(s"Failed to establish connection:\n$ex")
}
```

#### Connection testing code for Azure Database for MySQL

```scala
import java.sql.DriverManager

// Placeholder JDBC URL: copy yours from Azure portal > Azure Database for MySQL > Connection strings > JDBC.
val url = "jdbc:mysql://<your_server>.mysql.database.azure.com:3306/<your_database>?useSSL=true"
try {
    // Query the HMS VERSION table to confirm connectivity and read the schema version.
    val connection = DriverManager.getConnection(url, "<your_username>", "<your_password>")
    val result = connection.createStatement().executeQuery("select t.SCHEMA_VERSION from VERSION t")
    result.next();
    println(s"Successful to test connection. Hive Metastore version is ${result.getString(1)}")
} catch {
    case ex: Throwable => println(s"Failed to establish connection:\n$ex")
}
```

## Configure Spark to use the external Hive Metastore

After creating the linked service to the external Hive Metastore successfully, you need to set up a few Spark configurations to use the external Hive Metastore. You can set up the configuration at the Spark pool level or at the Spark session level.

Here are the configurations and descriptions:

> [!NOTE]
> Synapse aims to work smoothly with computes from HDI. However, HMS 3.1 in HDI 4.0 is not fully compatible with OSS HMS 3.1. For OSS HMS 3.1, please check [here](#hms-schema-change-for-oss-hms-31).

|Spark config|Description|
|--|--|
|`spark.sql.hive.metastore.version`|Supported versions: <ul><li>`0.13`</li><li>`1.2`</li><li>`2.1`</li><li>`2.3`</li><li>`3.1`</li></ul> Make sure you use the first two parts of the version without the third part.|
|`spark.hadoop.hive.synapse.externalmetastore.linkedservice.name`|Name of your linked service|

### Configure at Spark pool level

When creating the Spark pool, under the **Additional Settings** tab, put the below configurations in a text file and upload it in the **Apache Spark configuration** section. You can also use the context menu of an existing Spark pool and choose **Apache Spark configuration** to add these configurations.

:::image type="content" source="./media/use-external-metastore/config-spark-pool.png" alt-text="Configure the Spark pool":::

Update the metastore version and linked service name, and save the below configs in a text file for the Apache Spark pool configuration:

```properties
spark.sql.hive.metastore.version <your hms version, use only the first 2 parts>
spark.hadoop.hive.synapse.externalmetastore.linkedservice.name <your linked service name>
spark.sql.hive.metastore.jars /opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/*
```

### Configure at Spark session level

For a notebook session, you can also configure the Spark session in the notebook using the `%%configure` magic command. Here is the code:

```json
%%configure -f
{
    "conf":{
        "spark.sql.hive.metastore.version":"<your hms version, 2 parts>",
        "spark.sql.hive.metastore.jars":"/opt/hive-metastore/lib-<your hms version, 2 parts>/*:/usr/hdp/current/hadoop-client/lib/*",
        "spark.hadoop.hive.synapse.externalmetastore.linkedservice.name":"<your linked service name>"
    }
}
```

For batch jobs, the same configuration can also be applied via `SparkConf`, as sketched below.

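A minimal sketch (assuming Scala and placeholder values; the config keys mirror the table above):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical example: apply the external Hive Metastore configs on a batch job's session.
// <your hms version> and <your linked service name> are placeholders, not real values.
val spark = SparkSession.builder()
  .appName("external-hms-batch-job")
  .config("spark.sql.hive.metastore.version", "<your hms version, 2 parts>")
  .config("spark.hadoop.hive.synapse.externalmetastore.linkedservice.name", "<your linked service name>")
  .enableHiveSupport()
  .getOrCreate()
```
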
### Run queries to verify the connection

After all these settings, try listing catalog objects by running the below query in a Spark notebook to check the connectivity to the external Hive Metastore.

```python
spark.sql("show databases").show()
```

## Set up storage connection

The linked service to the Hive Metastore database just provides access to the Hive catalog metadata. To query the existing tables, you need to set up the connection to the storage account that stores the underlying data for your Hive tables as well.

### Set up connection to Azure Data Lake Storage Gen 2

#### Workspace primary storage account

If the underlying data of your Hive tables is stored in the workspace primary storage account, you don't need any extra settings. It will just work as long as you followed the storage setup instructions during workspace creation.

### Set up connection to Azure Blob Storage

If the underlying data of your Hive tables is stored in an Azure Blob Storage account, set up the connection following the steps below:

1. Open Synapse Studio, go to **Manage > Linked services** at left, click **New** to create a new linked service.

   :::image type="content" source="./media/use-external-metastore/connect-to-storage-account.png" alt-text="Connect to storage account" border="true":::

2. Choose **Azure Blob Storage** and click **Continue**.
3. Provide **Name** of the linked service. Record the name of the linked service; this info will be used in Spark configuration shortly.
4. Select the Azure Blob Storage account. Make sure the Authentication method is **Account key**. Currently, the Spark pool can only access a Blob Storage account via account key.

5. **Test connection** and click **Create**.

6. After creating the linked service to the Blob Storage account, when you run Spark queries, make sure you run the below Spark code in the notebook to get access to the Blob Storage account for the Spark session. Learn more about why you need to do this [here](./apache-spark-secure-credentials-with-tokenlibrary.md).

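A minimal sketch of such code (assuming Scala, the Synapse `TokenLibrary` described in the linked article, and placeholder account and linked service names):

```scala
// Hypothetical sketch: fetch the SAS token registered with the Blob Storage linked service
// and hand it to the Spark session. All <...> values are placeholders.
import com.microsoft.azure.synapse.tokenlibrary.TokenLibrary

val blobAccountName = "<your blob storage account name>"
val blobContainerName = "<your container name>"
val blobSasToken = TokenLibrary.getConnectionString("<your blob storage linked service name>")
spark.conf.set(s"fs.azure.sas.$blobContainerName.$blobAccountName.blob.core.windows.net", blobSasToken)
```
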
After setting up storage connections, you can query the existing tables in the Hive Metastore.

## Known limitations
- Synapse Studio object explorer will continue to show objects in the managed Synapse metastore instead of the external HMS; we are improving this experience.
- [SQL <-> Spark synchronization](../sql/develop-storage-files-spark-tables.md) doesn't work when using external HMS.
- Only Azure SQL Database and Azure Database for MySQL are supported as the external Hive Metastore database. Only SQL authorization is supported.
- Currently, Spark only works on external Hive tables and non-transactional/non-ACID managed Hive tables. Hive ACID/transactional tables are not supported now.
- Apache Ranger integration is not supported as of now.
## Troubleshooting

### See below error when querying a table stored in an ADLS Gen2 account

```text
Operation failed: "This request is not authorized to perform this operation using this permission.", 403, HEAD
```
This could happen because the user who runs the Spark query doesn't have enough access to the underlying storage account. Make sure the user who runs Spark queries has the **Storage Blob Data Contributor** role on the ADLS Gen2 storage account. This step can be done after creating the linked service.

### HMS schema-related settings

To avoid changing the HMS backend schema/version, the following Hive configs are set by the system by default (for example, `spark.hadoop.hive.metastore.schema.verification` is set to `true`):

If your HMS version is `1.2.1` or `1.2.2`, there's an issue in Hive that claims to require only `1.2.0` when `spark.hadoop.hive.metastore.schema.verification` is set to `true`. Our suggestion is to either modify your HMS version to `1.2.0`, or overwrite the below two configurations to work around it:

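A sketch of such an override (the first key is the verification setting named above; pairing it with the record-version check is an assumption, so verify both keys against your Hive version):

```properties
spark.hadoop.hive.metastore.schema.verification false
spark.hadoop.hive.metastore.schema.verification.record.version false
```
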
If you need to migrate your HMS version, we recommend using the [Hive Schema Tool](https://cwiki.apache.org/confluence/display/Hive/Hive+Schema+Tool). And if the HMS has been used by HDInsight clusters, we suggest using the [HDI provided version](../../hdinsight/interactive-query/apache-hive-migrate-workloads.md).

### HMS schema change for OSS HMS 3.1
Synapse aims to work smoothly with computes from HDI. However, HMS 3.1 in HDI 4.0 is not fully compatible with OSS HMS 3.1. So please apply the following manually to your HMS 3.1 if it's not provisioned by HDI.

### When sharing the metastore with HDInsight 4.0 Spark cluster, I cannot see the tables
If you want to share the Hive catalog with a Spark cluster in HDInsight 4.0, please ensure the property `spark.hadoop.metastore.catalog.default` in Synapse Spark aligns with the value in HDInsight Spark. The default value for HDI Spark is `spark` and the default value for Synapse Spark is `hive`; see the sketch below.

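For example, to point Synapse Spark at the catalog that HDI Spark uses, you could set the following in the Spark pool configuration (a sketch; the value `spark` is the HDI default named above):

```properties
spark.hadoop.metastore.catalog.default spark
```
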
### When sharing the Hive Metastore with HDInsight 4.0 Hive cluster, I can list the tables successfully, but only get empty result when I query the table
As mentioned in the limitations, the Synapse Spark pool only supports external Hive tables and non-transactional/non-ACID managed tables; it doesn't currently support Hive ACID/transactional tables. In HDInsight 4.0 Hive clusters, all managed tables are created as ACID/transactional tables by default, which is why you get empty results when querying those tables (see the sketch below).

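As an illustration (a hypothetical table; the database, table, and location names are placeholders), a table created as external rather than as an ACID-managed table stays readable from Synapse Spark:

```scala
// Hypothetical example: an external, non-ACID table that Synapse Spark can query.
// demo_db, demo_table, and the abfss location are placeholders.
spark.sql("""
  CREATE EXTERNAL TABLE IF NOT EXISTS demo_db.demo_table (id INT, name STRING)
  STORED AS PARQUET
  LOCATION 'abfss://<container>@<account>.dfs.core.windows.net/demo_table'
""")
```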