The Kafka API can only be accessed by resources inside the same virtual network. In this quickstart, you access the cluster directly using SSH. To connect other services, networks, or virtual machines to Kafka, you must first create a virtual network and then create the resources within the network. For more information, see the [Connect to Apache Kafka using a virtual network](apache-kafka-connect-vpn-gateway.md) document.
If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Prerequisites
* An SSH client. For more information, see [Connect to HDInsight (Apache Hadoop) using SSH](../hdinsight-hadoop-linux-use-ssh-unix.md).
## Create an Apache Kafka cluster
3. Select **I agree to the terms and conditions stated above**, select **Pin to dashboard**, and then click **Purchase**. It can take up to 20 minutes to create the cluster.
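
   If you prefer to script the deployment instead of clicking through the portal, the Azure CLI can create a Kafka cluster as well. The following is only a rough sketch: every value shown is a placeholder, and flag names can vary between CLI versions, so check `az hdinsight create --help` before running it.

   ```bash
   # Sketch only: all names, passwords, and the storage account below are placeholders.
   az hdinsight create \
       --name mykafka \
       --resource-group myresourcegroup \
       --location westus2 \
       --type kafka \
       --component-version kafka=2.1 \
       --http-user admin \
       --http-password 'ClusterLoginPassword1!' \
       --ssh-user sshuser \
       --ssh-password 'SshPassword1!' \
       --storage-account mystorageaccount \
       --workernode-count 4 \
       --workernode-data-disks-per-node 2
   ```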
## Connect to the cluster
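
The connection is made with the `ssh` command against the cluster's SSH endpoint, which follows the `CLUSTERNAME-ssh.azurehdinsight.net` pattern. A sketch, assuming an SSH user named `sshuser` and a cluster named `mykafka` (both placeholders):

```bash
# Replace sshuser and mykafka with the SSH user and cluster name you chose when creating the cluster.
ssh sshuser@mykafka-ssh.azurehdinsight.net
```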
3. When prompted, enter the password for the SSH user.
Once connected, you see information similar to the following text:

```text
Authorized uses only. All activity may be monitored and reported.
Welcome to Ubuntu 16.04.4 LTS (GNU/Linux 4.13.0-1011-azure x86_64)

 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

  Get cloud support with Ubuntu Advantage Cloud Guest:
    https://www.ubuntu.com/business/services/cloud

83 packages can be updated.
37 updates are security updates.


Welcome to Kafka on HDInsight.

Last login: Thu Mar 29 13:25:27 2018 from 108.252.109.241
ssuhuser@hn0-mykafk:~$
```
## <a id="getkafkainfo"></a>Get the Apache Zookeeper and Broker host information
When prompted, enter the name of the Kafka cluster.
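
   That prompt typically comes from reading the name into a shell variable; a sketch, where the variable name `CLUSTERNAME` is an assumption carried through the commands that follow:

   ```bash
   # Store the cluster name for reuse in later commands (CLUSTERNAME is an assumed variable name).
   read -p "Enter the Kafka cluster name: " CLUSTERNAME
   ```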
3. To set an environment variable with Zookeeper host information, use the command below. The command retrieves all Zookeeper hosts, then returns only the first two entries, so that you have some redundancy in case one host is unreachable.
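
   A sketch of such a command, assuming the cluster name is stored in `$CLUSTERNAME` (see above) and that the `jq` JSON processor is installed; the Ambari query details may differ for your cluster:

   ```bash
   # Query Ambari for the Zookeeper hosts (with port 2181 appended) and keep only the first two entries.
   export KAFKAZKHOSTS=$(curl -sS -u admin -G "https://$CLUSTERNAME.azurehdinsight.net/api/v1/clusters/$CLUSTERNAME/services/ZOOKEEPER/components/ZOOKEEPER_SERVER" \
       | jq -r '["\(.host_components[].HostRoles.host_name):2181"] | join(",")' \
       | cut -d',' -f1,2)
   ```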
When prompted, enter the password for the cluster login account (not the SSH account).
4. To verify that the environment variable is set correctly, use the following command:
```bash
# Display the value stored in the environment variable.
echo $KAFKAZKHOSTS
```
* Each partition is replicated across three worker nodes in the cluster.
If you created the cluster in an Azure region that provides three fault domains, use a replication factor of 3. Otherwise, use a replication factor of 4.
In regions with three fault domains, a replication factor of 3 allows replicas to be spread across the fault domains. In regions with two fault domains, a replication factor of 4 spreads the replicas evenly across the domains.
For information on the number of fault domains in a region, see the [Availability of Linux virtual machines](../../virtual-machines/windows/manage-availability.md#use-managed-disks-for-vms-in-an-availability-set) document.
Kafka is not aware of Azure fault domains. When creating partition replicas for topics, it may not distribute replicas properly for high availability.
To ensure high availability, use the [Apache Kafka partition rebalance tool](https://github.com/hdinsight/hdinsight-kafka-tools). This tool must be run from an SSH connection to the head node of your Kafka cluster.
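
The topic-creation command itself is along these lines (a sketch only: the topic name `test`, the partition count, and the installation path are assumptions, and `$KAFKAZKHOSTS` must be set as described earlier):

```bash
# Create an eight-partition topic named 'test' with a replication factor of 3.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
    --zookeeper $KAFKAZKHOSTS \
    --replication-factor 3 \
    --partitions 8 \
    --topic test
```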
This command retrieves the records from the topic and displays them. Using `--from-beginning` tells the consumer to start from the beginning of the stream, so all records are retrieved.
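
   For reference, such a consumer invocation might look like the following sketch (the topic name `test` and the installation path are assumptions, and `$KAFKABROKERS` is set in an earlier step not shown here):

   ```bash
   # Read every record in the 'test' topic, starting from the beginning of the stream.
   /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
       --bootstrap-server $KAFKABROKERS \
       --topic test \
       --from-beginning
   ```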
If you are using an older version of Kafka, replace `--bootstrap-server $KAFKABROKERS` with `--zookeeper $KAFKAZKHOSTS`.
4. Use __Ctrl + C__ to stop the consumer.
## Next steps
> [!div class="nextstepaction"]
> [Use Apache Spark with Apache Kafka](../hdinsight-apache-kafka-spark-structured-streaming.md)