articles/hdinsight/apache-kafka-spark-structured-streaming-cosmosdb.md

---
title: Apache Spark & Apache Kafka with Cosmos DB - Azure HDInsight
description: Learn how to use Apache Spark Structured Streaming to read data from Apache Kafka and then store it into Azure Cosmos DB. In this example, you stream data using a Jupyter notebook from Spark on HDInsight.
author: hrasheed-msft
ms.author: hrasheed
ms.reviewer: jasonh
ms.service: hdinsight
ms.topic: conceptual
ms.custom: hdinsightactive
ms.date: 11/18/2019
---

# Use Apache Spark Structured Streaming with Apache Kafka and Azure Cosmos DB

## Create the clusters

Apache Kafka on HDInsight doesn't provide access to the Kafka brokers over the public internet. Anything that talks to Kafka must be in the same Azure virtual network as the nodes in the Kafka cluster. For this example, both the Kafka and Spark clusters are located in an Azure virtual network. The following diagram shows how communication flows between the clusters:

![Diagram of data flow between the Spark and Kafka clusters in an Azure virtual network]
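
Concretely, this means client code can't reach the brokers through public host names; it has to use their VNet-internal addresses. As a minimal Scala sketch of what that looks like, the fragment below uses hypothetical worker-node names; the actual FQDNs come from your own deployment, not from this article:

```scala
// Hypothetical VNet-internal broker addresses. No public DNS names exist for
// HDInsight Kafka brokers, so only hosts inside the same virtual network
// (such as the Spark cluster) can resolve and reach these endpoints.
val kafkaBrokers =
  "wn0-kafka.<cluster-fqdn-suffix>:9092," +
  "wn1-kafka.<cluster-fqdn-suffix>:9092"
```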
* A Spark on HDInsight 3.6 cluster.

* An Azure Virtual Network, which contains the HDInsight clusters. The virtual network created by the template uses the 10.0.0.0/16 address space.

* An Azure Cosmos DB SQL API database.

> [!IMPORTANT]
> The structured streaming notebook used in this example requires Spark on HDInsight 3.6. If you use an earlier version of Spark on HDInsight, you receive errors when using the notebook.

1. Use the following information to populate the entries on the **Custom deployment** section:

    |Property |Value |
    |---|---|
    |Subscription|Select your Azure subscription.|
    |Resource group|Create a group or select an existing one. This group contains the HDInsight cluster.|
    |Cosmos DB Account Name|This value is used as the name for the Cosmos DB account. The name can only contain lowercase letters, numbers, and the hyphen (-) character. It must be between 3 and 31 characters in length.|
    |Base Cluster Name|This value is used as the base name for the Spark and Kafka clusters. For example, entering **myhdi** creates a Spark cluster named **spark-myhdi** and a Kafka cluster named **kafka-myhdi**.|
    |Cluster Version|The HDInsight cluster version. This example is tested with HDInsight 3.6, and may not work with other cluster types.|
    |Cluster Login User Name|The admin user name for the Spark and Kafka clusters.|
    |Cluster Login Password|The admin user password for the Spark and Kafka clusters.|
    |Ssh User Name|The SSH user to create for the Spark and Kafka clusters.|
    |Ssh Password|The password for the SSH user for the Spark and Kafka clusters.|

1. Read the **Terms and Conditions**, and then select **I agree to the terms and conditions stated above**.

1. Finally, select **Purchase**. It may take up to 45 minutes to create the clusters, virtual network, and Cosmos DB account.

## Create the Cosmos DB database and collection

The project used in this document stores data in Cosmos DB. Before running the code, you must first create a _database_ and _collection_ in your Cosmos DB instance. You must also retrieve the document endpoint and the _key_ used to authenticate requests to Cosmos DB.

One way to do this is to use the [Azure CLI](https://docs.microsoft.com/cli/azure/?view=azure-cli-latest). The following script will create a database named `kafkadata` and a collection named `kafkacollection`. It then returns the document endpoint and the primary key.

```azurecli
databaseName='kafkadata'
collectionName='kafkacollection'

# Create the database
az cosmosdb database create --name $name --db-name $databaseName --resource-group $resourceGroupName

# Create the collection
az cosmosdb collection create --collection-name $collectionName --name $name --db-name $databaseName --resource-group $resourceGroupName

# Get the document endpoint
az cosmosdb show --name $name --resource-group $resourceGroupName --query documentEndpoint

# Get the primary key
az cosmosdb keys list --name $name --resource-group $resourceGroupName --type keys
```

These commands return the document endpoint and the primary key for your Cosmos DB account.

> [!IMPORTANT]
> Save the endpoint and key values, as they are needed in the Jupyter Notebooks.
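
To show where these two values end up, here is a minimal Scala sketch of the kind of configuration map the [azure-cosmosdb-spark](https://github.com/Azure/azure-cosmosdb-spark) connector accepts. The `Endpoint` and `Masterkey` option names follow that connector's documented configuration, but treat this as an illustration rather than the notebooks' exact code; the angle-bracket values are placeholders for the endpoint and key you just saved.

```scala
// Sketch only: where the saved endpoint and key would be pasted.
val cosmosConfig = Map(
  "Endpoint"   -> "<document endpoint, from az cosmosdb show>",
  "Masterkey"  -> "<primary key, from az cosmosdb keys list>",
  "Database"   -> "kafkadata",       // database created by the script above
  "Collection" -> "kafkacollection"  // collection created by the script above
)
```
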
## Get the notebooks

The code for the example described in this document is available at [https://github.com/Azure-Samples/hdinsight-spark-scala-kafka-cosmosdb](https://github.com/Azure-Samples/hdinsight-spark-scala-kafka-cosmosdb).
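
Before opening the notebooks, it can help to see the shape of the pipeline they implement. The following Scala sketch outlines that pattern under a few assumptions: it reuses the hypothetical `cosmosConfig` map from the previous section, the broker list and topic name are placeholders, and the `spark-sql-kafka` and `azure-cosmosdb-spark` packages are assumed to be available on the cluster. It's an outline of the approach, not the notebooks' exact code.

```scala
import org.apache.spark.sql.SparkSession

// Outline only: read a stream from Kafka and write it to Cosmos DB.
// Broker addresses, the topic name, and the checkpoint path are placeholders.
val spark = SparkSession.builder().getOrCreate()

val kafkaDF = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "<broker1>:9092,<broker2>:9092")
  .option("subscribe", "<topic-name>")
  .option("startingOffsets", "earliest")
  .load()

// Kafka delivers each record's payload as bytes; cast it to a string so the
// documents stored in Cosmos DB are readable.
val messages = kafkaDF.selectExpr("CAST(value AS STRING) AS value")

// `cosmosConfig` is the hypothetical endpoint/key/database/collection map
// sketched in the previous section.
val query = messages.writeStream
  .format("com.microsoft.azure.cosmosdb.spark.streaming.CosmosDBSinkProvider")
  .outputMode("append")
  .options(cosmosConfig)
  .option("checkpointLocation", "/checkpoints/kafka-to-cosmosdb")
  .start()
```

Once started, records produced to the Kafka topic flow continuously into the `kafkacollection` collection until `query.stop()` is called.
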

## Next steps

Now that you've learned how to use Apache Spark Structured Streaming, see the following documents to learn more about working with Apache Spark, Apache Kafka, and Azure Cosmos DB:

* [How to use Apache Spark streaming (DStream) with Apache Kafka](hdinsight-apache-spark-with-kafka.md)
* [Start with Jupyter Notebook and Apache Spark on HDInsight](spark/apache-spark-jupyter-spark-sql.md)