-> Any utility or application that creates a valid Public Key Cryptography Standards (PKCS) \#10 request can be used to form the SSL certificate request.
+> Any utility or application that creates a valid Public Key Cryptography Standards (PKCS) \#10 request can be used to form the TLS/SSL certificate request.

 Verify that the certificate is installed in the computer's **Personal** store:
articles/hdinsight/hadoop/apache-hadoop-connect-hive-jdbc-driver.md (1 addition, 1 deletion)

@@ -26,7 +26,7 @@ For more information on the Hive JDBC Interface, see [HiveJDBCInterface](https:/

 ## JDBC connection string

-JDBC connections to an HDInsight cluster on Azure are made over port 443, and the traffic is secured using SSL. The public gateway that the clusters sit behind redirects the traffic to the port that HiveServer2 is actually listening on. The following connection string shows the format to use for HDInsight:
+JDBC connections to an HDInsight cluster on Azure are made over port 443, and the traffic is secured using TLS/SSL. The public gateway that the clusters sit behind redirects the traffic to the port that HiveServer2 is actually listening on. The following connection string shows the format to use for HDInsight:
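For reference, the connection-string format the paragraph above describes can be sketched as follows. The cluster name is a hypothetical placeholder; the URL shape (port 443, `ssl=true`, HTTP transport through the gateway) follows the format the article documents.

```shell
# Build the HDInsight Hive JDBC URL described above.
# "myclustername" is a hypothetical placeholder; substitute your own cluster name.
CLUSTER="myclustername"
JDBC_URL="jdbc:hive2://${CLUSTER}.azurehdinsight.net:443/default;ssl=true;transportMode=http;httpPath=/hive2"
echo "$JDBC_URL"
```

A JDBC client (for example SQuirreL SQL or a Java application using the Hive JDBC driver) would be pointed at this URL together with the cluster login credentials.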
articles/hdinsight/hadoop/apache-hadoop-use-hive-beeline.md (1 addition, 1 deletion)

@@ -60,7 +60,7 @@ To find the JDBC URL from Ambari:

 ### Over public or private endpoints

-When connecting to a cluster using the public or private endpoints, you must provide the cluster login account name (default `admin`) and password. For example, using Beeline from a client system to connect to the `clustername.azurehdinsight.net` address. This connection is made over port `443`, and is encrypted using SSL.
+When connecting to a cluster using the public or private endpoints, you must provide the cluster login account name (default `admin`) and password. For example, using Beeline from a client system to connect to the `clustername.azurehdinsight.net` address. This connection is made over port `443`, and is encrypted using TLS/SSL.

 Replace `clustername` with the name of your HDInsight cluster. Replace `admin` with the cluster login account for your cluster. For ESP clusters, use the full UPN (for example, [email protected]). Replace `password` with the password for the cluster login account.
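A sketch of the Beeline invocation the paragraph describes. The cluster name and password are hypothetical placeholders, and the `beeline` call itself is shown commented out because it needs a live cluster:

```shell
CLUSTER="myclustername"   # hypothetical; use your cluster name
URL="jdbc:hive2://${CLUSTER}.azurehdinsight.net:443/;ssl=true;transportMode=http;httpPath=/hive2"
# Connect over the public endpoint on port 443 (TLS); requires a live cluster:
# beeline -u "$URL" -n admin -p 'password'
echo "$URL"
```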
articles/hdinsight/hdinsight-hadoop-use-blob-storage.md (1 addition, 1 deletion)

@@ -35,7 +35,7 @@ Sharing one blob container as the default file system for multiple clusters isn'

 ## Access files from within cluster

-There are several ways you can access the files in Data Lake Storage from an HDInsight cluster. The URI scheme provides unencrypted access (with the *wasb:* prefix) and SSL encrypted access (with *wasbs*). We recommend using *wasbs* wherever possible, even when accessing data that lives inside the same region in Azure.
+There are several ways you can access the files in Data Lake Storage from an HDInsight cluster. The URI scheme provides unencrypted access (with the *wasb:* prefix) and TLS encrypted access (with *wasbs*). We recommend using *wasbs* wherever possible, even when accessing data that lives inside the same region in Azure.

 * **Using the fully qualified name**. With this approach, you provide the full path to the file that you want to access.
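For context, the two URI schemes contrasted above look like this. The storage account, container, and path are hypothetical placeholders; only the scheme prefix differs between the unencrypted and TLS-encrypted forms:

```shell
ACCOUNT="mystorageaccount"   # hypothetical storage account name
CONTAINER="mycontainer"      # hypothetical container name
# Same blob, unencrypted vs. TLS-encrypted access:
WASB_URI="wasb://${CONTAINER}@${ACCOUNT}.blob.core.windows.net/example/data/file.txt"
WASBS_URI="wasbs://${CONTAINER}@${ACCOUNT}.blob.core.windows.net/example/data/file.txt"
# On a cluster you would use it like: hdfs dfs -ls "$WASBS_URI"
echo "$WASBS_URI"
```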
articles/hdinsight/kafka/apache-kafka-ssl-encryption-authentication.md

-description: Set up SSL encryption for communication between Kafka clients and Kafka brokers as well as between Kafka brokers. Set up SSL authentication of clients.
+description: Set up TLS encryption for communication between Kafka clients and Kafka brokers as well as between Kafka brokers. Set up SSL authentication of clients.
 author: hrasheed-msft
 ms.reviewer: jasonh
 ms.service: hdinsight
@@ -9,18 +9,18 @@ ms.topic: conceptual
 ms.date: 05/01/2019
 ms.author: hrasheed
 ---
-# Set up Secure Sockets Layer (SSL) encryption and authentication for Apache Kafka in Azure HDInsight
+# Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight

-This article shows you how to set up SSL encryption between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way SSL).
+This article shows you how to set up Transport Layer Security (TLS) encryption, previously known as Secure Sockets Layer (SSL) encryption, between Apache Kafka clients and Apache Kafka brokers. It also shows you how to set up authentication of clients (sometimes referred to as two-way TLS).

 > [!Important]
-> There are two clients which you can use for Kafka applications: a Java client and a console client. Only the Java client `ProducerConsumer.java` can use SSL for both producing and consuming. The console producer client `console-producer.sh` does not work with SSL.
+> There are two clients which you can use for Kafka applications: a Java client and a console client. Only the Java client `ProducerConsumer.java` can use TLS for both producing and consuming. The console producer client `console-producer.sh` does not work with TLS.

 > [!Note]
 > HDInsight Kafka console producer with version 1.1 does not support SSL.

 ## Apache Kafka broker setup

-The Kafka SSL broker setup will use four HDInsight cluster VMs in the following way:
+The Kafka TLS broker setup will use four HDInsight cluster VMs in the following way:

 * headnode 0 - Certificate Authority (CA)
 * worker node 0, 1, and 2 - brokers

@@ -113,7 +113,7 @@ Use the following detailed instructions to complete the broker setup:

-## Update Kafka configuration to use SSL and restart brokers
+## Update Kafka configuration to use TLS and restart brokers

 You have now set up each Kafka broker with a keystore and truststore, and imported the correct certificates. Next, modify related Kafka configuration properties using Ambari and then restart the Kafka brokers.
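The broker-side properties that this kind of Ambari change sets can be sketched as a Kafka `server.properties` fragment. The property names (`listeners`, `ssl.keystore.*`, `ssl.truststore.*`) are standard Kafka broker settings; the paths and passwords below are hypothetical placeholders, not values from the article:

```shell
# Write a sketch of the broker-side TLS properties to a scratch file.
CONF="$(mktemp)"
cat > "$CONF" <<'EOF'
listeners=PLAINTEXT://localhost:9092,SSL://localhost:9093
ssl.keystore.location=/home/sshuser/ssl/kafka.server.keystore.jks
ssl.keystore.password=ServerPassword123
ssl.key.password=ServerPassword123
ssl.truststore.location=/home/sshuser/ssl/kafka.server.truststore.jks
ssl.truststore.password=ServerPassword123
EOF
# Count the ssl.* properties we just wrote.
grep -c '^ssl\.' "$CONF"
```

In Ambari these values go under the Kafka broker configuration, after which the brokers are restarted so the TLS listener on 9093 takes effect.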
@@ -160,7 +160,7 @@ To complete the configuration modification, do the following steps:

 ## Client setup (without authentication)

-If you don't need authentication, the summary of the steps to set up only SSL encryption are:
+If you don't need authentication, the summary of the steps to set up only TLS encryption are:

 1. Sign in to the CA (active head node).
 1. Copy the CA cert to client machine from the CA machine (wn0).
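The encryption-only client setup these steps describe ends in a small client properties file. A sketch, using standard Kafka client settings (`security.protocol`, `ssl.truststore.*`); the path and password are hypothetical placeholders:

```shell
# Sketch of an encryption-only Kafka client config (no client authentication).
CLIENT_CONF="$(mktemp)"
cat > "$CLIENT_CONF" <<'EOF'
security.protocol=SSL
ssl.truststore.location=/home/sshuser/ssl/kafka.client.truststore.jks
ssl.truststore.password=ClientPassword123
EOF
# A client would then pass this file to the console tools, for example:
#   kafka-console-consumer.sh --bootstrap-server <broker>:9093 \
#     --topic topic1 --consumer.config "$CLIENT_CONF"
wc -l < "$CLIENT_CONF"
```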
@@ -213,7 +213,7 @@ These steps are detailed in the following code snippets.

 ## Client setup (with authentication)

 > [!Note]
-> The following steps are required only if you are setting up both SSL encryption **and** authentication. If you are only setting up encryption, then see [Client setup without authentication](apache-kafka-ssl-encryption-authentication.md#client-setup-without-authentication).
+> The following steps are required only if you are setting up both TLS encryption **and** authentication. If you are only setting up encryption, then see [Client setup without authentication](apache-kafka-ssl-encryption-authentication.md#client-setup-without-authentication).

 The following four steps summarize the tasks needed to complete the client setup:

@@ -296,7 +296,7 @@ The details of each step are given below.

 ## Verification

 > [!Note]
-> If HDInsight 4.0 and Kafka 2.1 is installed, you can use the console producer/consumers to verify your setup. If not, run the Kafka producer on port 9092 and send messages to the topic, and then use the Kafka consumer on port 9093 which uses SSL.
+> If HDInsight 4.0 and Kafka 2.1 is installed, you can use the console producer/consumers to verify your setup. If not, run the Kafka producer on port 9092 and send messages to the topic, and then use the Kafka consumer on port 9093 which uses TLS.
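The verification flow in the note above can be sketched with the stock Kafka console tools. The broker host and topic name are hypothetical placeholders, and the commands are shown commented out because they require a live cluster; `client-ssl.properties` refers to the client config file built during client setup:

```shell
TOPIC="topic1"           # hypothetical topic name
BROKER="wn0-kafka:9093"  # hypothetical broker host; 9093 is the TLS listener
# Produce over TLS (HDInsight 4.0 / Kafka 2.1 console clients):
#   kafka-console-producer.sh --broker-list "$BROKER" --topic "$TOPIC" \
#     --producer.config client-ssl.properties
# Consume the same messages back over TLS:
#   kafka-console-consumer.sh --bootstrap-server "$BROKER" --topic "$TOPIC" \
#     --consumer.config client-ssl.properties --from-beginning
echo "verify ${TOPIC} via ${BROKER}"
```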
articles/hdinsight/kafka/kafka-faq.md (2 additions, 2 deletions)

@@ -44,7 +44,7 @@ Using [Enterprise Security Package (ESP)](../domain-joined/apache-domain-joined-

 ## Is my data encrypted? Can I use my own keys?

-All Kafka messages on the managed disks are encrypted with [Azure Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md). Data-in-transit (for example, data being transmitted from clients to brokers and the other way around) isn't encrypted by default. It's possible to encrypt such traffic by [setting up SSL on your own](./apache-kafka-ssl-encryption-authentication.md). Additionally, HDInsight allows you to manage their own keys to encrypt the data at rest. See [Customer-managed key disk encryption](../disk-encryption.md), for more information.
+All Kafka messages on the managed disks are encrypted with [Azure Storage Service Encryption (SSE)](../../storage/common/storage-service-encryption.md). Data-in-transit (for example, data being transmitted from clients to brokers and the other way around) isn't encrypted by default. It's possible to encrypt such traffic by [setting up TLS on your own](./apache-kafka-ssl-encryption-authentication.md). Additionally, HDInsight allows you to manage their own keys to encrypt the data at rest. See [Customer-managed key disk encryption](../disk-encryption.md), for more information.

 ## How do I connect clients to my cluster?

@@ -90,5 +90,5 @@ Use Azure monitor to analyze your [Kafka logs](./apache-kafka-log-analytics-oper

 ## Next steps

-* [Set up Secure Sockets Layer (SSL) encryption and authentication for Apache Kafka in Azure HDInsight](./apache-kafka-ssl-encryption-authentication.md)
+* [Set up TLS encryption and authentication for Apache Kafka in Azure HDInsight](./apache-kafka-ssl-encryption-authentication.md)
 * [Use MirrorMaker to replicate Apache Kafka topics with Kafka on HDInsight](./apache-kafka-mirroring.md)
articles/hdinsight/kafka/migrate-versions.md (1 addition, 1 deletion)

@@ -63,7 +63,7 @@ The following migration guidance assumes an Apache Kafka 1.0.0 or 1.1.0 cluster

 To complete the migration, do the following steps:

-1. **Deploy a new HDInsight 4.0 cluster and clients for test.** Deploy a new HDInsight 4.0 Kafka cluster. If multiple Kafka cluster versions can be selected, it's recommended to select the latest version. After deployment, set some parameters as needed and create a topic with the same name as your existing environment. Also, set SSL and bring-your-own-key (BYOK) encryption as needed. Then check if it works correctly with the new cluster.
+1. **Deploy a new HDInsight 4.0 cluster and clients for test.** Deploy a new HDInsight 4.0 Kafka cluster. If multiple Kafka cluster versions can be selected, it's recommended to select the latest version. After deployment, set some parameters as needed and create a topic with the same name as your existing environment. Also, set TLS and bring-your-own-key (BYOK) encryption as needed. Then check if it works correctly with the new cluster.