articles/hdinsight/spark/apache-spark-connect-to-sql-database.md
---
title: Use Apache Spark to read and write data to Azure SQL Database
description: Learn how to set up a connection between HDInsight Spark cluster and an Azure SQL Database to read data, write data, and stream data into a SQL database
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive
ms.date: 03/05/2020
---
# Use HDInsight Spark cluster to read and write data to Azure SQL Database

Learn how to connect an Apache Spark cluster in Azure HDInsight with an Azure SQL Database and then read, write, and stream data into the SQL database. The instructions in this article use a [Jupyter Notebook](https://jupyter.org/) to run the Scala code snippets. However, you can create a standalone application in Scala or Python and perform the same tasks.
## Prerequisites

* Azure HDInsight Spark cluster. Follow the instructions at [Create an Apache Spark cluster in HDInsight](apache-spark-jupyter-spark-sql.md).

* Azure SQL Database. Follow the instructions at [Create an Azure SQL Database](../../sql-database/sql-database-get-started-portal.md). Make sure you create a database with the sample **AdventureWorksLT** schema and data. Also, make sure you create a server-level firewall rule to allow your client's IP address to access the SQL database on the server. The instructions to add the firewall rule are available in the same article. Once you've created your Azure SQL Database, make sure you keep the following values handy. You need them to connect to the database from a Spark cluster.

    * Server name hosting the Azure SQL Database.
    * Azure SQL Database name.
    * Azure SQL Database admin user name / password.

* SQL Server Management Studio (SSMS). Follow the instructions at [Use SSMS to connect and query data](../../sql-database/sql-database-connect-query-ssms.md).
You can now start creating your application.

## Read data from Azure SQL Database

In this section, you read data from a table (for example, **SalesLT.Address**) that exists in the AdventureWorks database.

1. In a new Jupyter notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your Azure SQL Database.

    ```scala
    // Declare the values for your Azure SQL database
    ```
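The declaration cell is truncated in this excerpt. A minimal sketch of what it might contain; the variable names and placeholders below are illustrative, chosen only to match the `jdbc_url` and `connectionProperties` names that the later read snippet expects:

```scala
import java.util.Properties

// Placeholder connection values: replace each <...> with your own.
val jdbcHostname = "<servername>.database.windows.net"  // server hosting your Azure SQL Database
val jdbcPort     = 1433                                 // default SQL Database port
val jdbcDatabase = "<databasename>"
val jdbcUsername = "<admin user>"
val jdbcPassword = "<password>"

// Build the JDBC URL and the connection properties used by the read/write snippets.
val jdbc_url = s"jdbc:sqlserver://$jdbcHostname:$jdbcPort;database=$jdbcDatabase;encrypt=true;loginTimeout=30;"
val connectionProperties = new Properties()
connectionProperties.put("user", jdbcUsername)
connectionProperties.put("password", jdbcPassword)
```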
1. Use the snippet below to create a dataframe with the data from a table in your Azure SQL Database. In this snippet, we use a `SalesLT.Address` table that is available as part of the **AdventureWorksLT** database. Paste the snippet in a code cell and press **SHIFT + ENTER** to run.

    ```scala
    val sqlTableDF = spark.read.jdbc(jdbc_url, "SalesLT.Address", connectionProperties)
    ```
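Once the dataframe exists, ordinary Spark operations apply. A short sketch; the column names are assumed from the **AdventureWorksLT** `SalesLT.Address` schema:

```scala
// Inspect the schema Spark inferred from the SQL table.
sqlTableDF.printSchema()

// Show the first rows.
sqlTableDF.show(10)

// Select a subset of columns.
sqlTableDF.select("AddressLine1", "City").show(10)

// Filter rows; simple predicates like this can be pushed down to SQL.
sqlTableDF.filter("City = 'London'").show()
```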
In this section, we use a sample CSV file available on the cluster to create a table in Azure SQL Database and populate it with data. The sample CSV file (**HVAC.csv**) is available on all HDInsight clusters at `HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv`.

1. In a new Jupyter notebook, in a code cell, paste the following snippet and replace the placeholder values with the values for your Azure SQL Database.

    ```scala
    // Declare the values for your Azure SQL database
    ```
    ```scala
    readDf.createOrReplaceTempView("temphvactable")
    spark.sql("create table hvactable_hive as select * from temphvactable")
    ```

1. Finally, use the hive table to create a table in Azure SQL Database. The following snippet creates `hvactable` in Azure SQL Database.
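The snippet that performs the write is elided in this excerpt. Assuming the same `jdbc_url` and `connectionProperties` as in the read section, a sketch of the JDBC write might look like:

```scala
import org.apache.spark.sql.SaveMode

// Read the Hive table created in the previous step back into a dataframe,
// then write it out to Azure SQL Database as dbo.hvactable.
spark.table("hvactable_hive")
  .write
  .mode(SaveMode.Overwrite)  // replace the table if it already exists
  .jdbc(jdbc_url, "hvactable", connectionProperties)
```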
1. Connect to the Azure SQL Database using SSMS and verify that you see a `dbo.hvactable` there.

    a. Start SSMS and connect to the Azure SQL Database by providing connection details as shown in the screenshot below.

    ![Connect to SQL database using SSMS1](./media/apache-spark-connect-to-sql-database/connect-to-sql-database-ssms.png "Connect to SQL database using SSMS1")

    b. From **Object Explorer**, expand the Azure SQL Database and the Table node to see the **dbo.hvactable** created.

    ![Connect to SQL database using SSMS2](./media/apache-spark-connect-to-sql-database/connect-to-sql-database-ssms2.png "Connect to SQL database using SSMS2")
    ```sql
    SELECT * FROM hvactable
    ```

## Stream data into Azure SQL Database

In this section, we stream data into the `hvactable` that you already created in Azure SQL Database in the previous section.

1. As a first step, make sure there are no records in the `hvactable`. Using SSMS, run the following query on the table.

    ```sql
    TRUNCATE TABLE [dbo].[hvactable]
    ```
1. We stream data from the **HVAC.csv** into the `hvactable`. The HVAC.csv file is available on the cluster at `/HdiSamples/HdiSamples/SensorSampleData/HVAC/`. In the following snippet, we first get the schema of the data to be streamed. Then, we create a streaming dataframe using that schema. Paste the snippet in a code cell and press **SHIFT + ENTER** to run.

    ```scala
    val userSchema = spark.read.option("header", "true").csv("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/HVAC.csv").schema
    val readStreamDf = spark.readStream.schema(userSchema).csv("wasbs:///HdiSamples/HdiSamples/SensorSampleData/hvac/")
    readStreamDf.printSchema
    ```

1. The output shows the schema of **HVAC.csv**. The `hvactable` has the same schema as well. The output lists the columns in the table.

    ![Schema of table](./media/apache-spark-connect-to-sql-database/hdinsight-schema-table.png "Schema of table")

1. Finally, use the following snippet to read data from the HVAC.csv and stream it into the `hvactable` in Azure SQL Database. Paste the snippet in a code cell, replace the placeholder values with the values for your Azure SQL Database, and then press **SHIFT + ENTER** to run.

    ```scala
    val WriteToSQLQuery = readStreamDf.writeStream.foreach(new ForeachWriter[Row] {
      var connection: java.sql.Connection = _
    ```
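The `ForeachWriter` body is truncated in this excerpt. A sketch of the usual `open`/`process`/`close` shape, with placeholder credentials and a deliberately naive INSERT, might be:

```scala
import java.sql.{Connection, DriverManager, Statement}
import org.apache.spark.sql.{ForeachWriter, Row}

// Illustrative writer: jdbc_url is the URL built earlier; user/password are placeholders.
val writer = new ForeachWriter[Row] {
  var connection: Connection = _
  var statement: Statement = _

  // Called once per partition/epoch: open the JDBC connection.
  def open(partitionId: Long, epochId: Long): Boolean = {
    Class.forName("com.microsoft.sqlserver.jdbc.SQLServerDriver")
    connection = DriverManager.getConnection(jdbc_url, "<user>", "<password>")
    statement = connection.createStatement()
    true
  }

  // Called for every row: insert it into hvactable.
  def process(row: Row): Unit = {
    // Naive value formatting for a sketch; real code should quote/escape values.
    statement.execute(s"INSERT INTO hvactable VALUES (${row.mkString(",")})")
  }

  // Called when the epoch ends: release the connection.
  def close(errorOrNull: Throwable): Unit = {
    if (connection != null) connection.close()
  }
}
```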
    ```scala
    var streamingQuery = WriteToSQLQuery.start()
    ```

1. Verify that the data is being streamed into the `hvactable` by running the following query in SQL Server Management Studio (SSMS). Every time you run the query, it shows the number of rows in the table increasing.
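One simple verification query, assuming the `dbo.hvactable` created earlier:

```sql
-- The row count should keep increasing while the streaming query is running.
SELECT COUNT(*) FROM [dbo].[hvactable]
```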