
Commit 9fa7cfc

Fixes #316: Required Kafka ACLs not documented (#459)
1 parent 7cdf9ed commit 9fa7cfc

File tree

2 files changed: +145 −1 lines changed


.gitignore

Lines changed: 2 additions & 0 deletions
@@ -33,3 +33,5 @@ Thumbs.db
 .cache-main
 .cache-tests
 bin
+doc/node
+doc/node_modules

doc/asciidoc/kafka-ssl/index.adoc

Lines changed: 143 additions & 1 deletion
@@ -110,6 +110,27 @@ dbms.jvm.additional=-Djavax.net.debug=ssl:handshake
This line `*dbms.jvm.additional=-Djavax.net.debug=ssl:handshake*` is optional, but it helps when debugging SSL issues.

[NOTE]
====
When configuring a secure connection between Neo4j and Kafka, and using the SASL protocol in particular, make sure to use the following properties:

[source, properties]
----
kafka.security.protocol=SASL_SSL
kafka.sasl.mechanism=GSSAPI
----

and *not* the following, which are broker-side (server) settings, not client-side ones:

[source, properties]
----
security.inter.broker.protocol=SASL_SSL
sasl.mechanism.inter.broker.protocol=GSSAPI
----
====

=== Testing

After starting Kafka and Neo4j, you can test by creating a Person node in Neo4j and then querying the topic as follows:
@@ -129,4 +150,125 @@ security.protocol=SSL
ssl.truststore.location=/home/kafka/security/kafka.client.truststore.jks
ssl.truststore.password=neo4jpassword
ssl.endpoint.identification.algorithm=
----

=== Authentication with SASL

You can configure JAAS by providing a JAAS configuration file. To do this, connect to your Kafka server and modify the `config/server.properties` file. The example below was tested with Kafka on AWS, but other configurations, including ones without the EXTERNAL and INTERNAL listener settings, should work as well.

[source, properties]
----
listeners=EXTERNAL://0.0.0.0:9092,INTERNAL://0.0.0.0:9093,CLIENT://0.0.0.0:9094
listener.security.protocol.map=EXTERNAL:SASL_PLAINTEXT,INTERNAL:PLAINTEXT,CLIENT:SASL_PLAINTEXT

advertised.listeners=EXTERNAL://18.188.84.xxx:9092,INTERNAL://172.31.43.xxx:9093,CLIENT://18.188.84.xxx:9094

zookeeper.connect=18.188.84.xxx:2181

sasl.mechanism.inter.broker.protocol=PLAIN
sasl.enabled.mechanisms=PLAIN
inter.broker.listener.name=INTERNAL
----
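
With SASL/PLAIN enabled on the EXTERNAL and CLIENT listeners, the broker also needs a JAAS file defining its own credentials and the users it accepts. A minimal sketch follows; the principal names and passwords here are illustrative placeholders, not values from the original setup:

[source, properties]
----
KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_neo4j="neo4j-secret";
};
----

The broker is then pointed at this file at startup, for example via `-Djava.security.auth.login.config=/path/to/kafka_jaas.conf` in `KAFKA_OPTS`.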

On the Neo4j side, the following is required. Note that in this case we are connecting to the public AWS IP address.

. Copy the contents of `~/kafka/conf/kafka_jaas.conf` from your Kafka server and save it to a file on your Neo4j server (e.g. `~/conf/kafka_client_jaas.conf`).

. In *neo4j.conf*, add the following:
+
[source, properties]
----
dbms.jvm.additional=-Djava.security.auth.login.config=/Users/davidfauth/neo4j-enterprise-4.0.4_kafka/conf/kafka_client_jaas.conf
kafka.security.protocol=SASL_PLAINTEXT
kafka.sasl.mechanism=PLAIN
----
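
The copied `kafka_client_jaas.conf` supplies the client-side credentials. A minimal sketch for SASL/PLAIN is shown below; the username and password are placeholders, not values from the original setup:

[source, properties]
----
KafkaClient {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="neo4j"
   password="neo4j-secret";
};
----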

For more information, please consult the official Confluent documentation at the following links:

* https://docs.confluent.io/platform/current/kafka/authentication_sasl/index.html

=== Authorization with ACLs

To configure the plugin for use with ACLs, the following configuration properties are required:

[source, properties]
----
kafka.authorizer.class.name=kafka.security.authorizer.AclAuthorizer
kafka.zookeeper.set.acl=true
----

[NOTE]
====
* `kafka.security.authorizer.AclAuthorizer` (the default Kafka authorizer implementation) was introduced in Apache Kafka 2.4/Confluent Platform 5.4.0. If you are running a previous version, use `kafka.security.auth.SimpleAclAuthorizer` instead. If you are using the Confluent Platform, you can also use the LDAP authorizer (please refer to the official Confluent documentation for further details: https://docs.confluent.io/platform/current/security/ldap-authorizer/quickstart.html).
* Note that `zookeeper.set.acl` is **false** by default.
====

The official Kafka documentation states that if a resource has no associated ACLs, then no one is allowed to access that resource, except super users.
If this is the case in your Kafka cluster, then you also have to add the following:

[source, properties]
----
kafka.allow.everyone.if.no.acl.found=true
----

[NOTE]
Be very careful when using the above property: as its name implies, it allows access to everyone whenever no ACL is found.

If super users are specified, then also include:

[source, properties]
----
kafka.super.users=...
----

Moreover, if you change the default principal (user name) mapping rule, then you also have to add the following properties:

* If you use SSL encryption:
+
[source, properties]
----
kafka.ssl.principal.mapping.rules=...
----
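+
As an illustration, a rule that extracts the CN from a certificate's distinguished name could look like the following; the pattern shown is a generic example, not taken from the original setup:
+
[source, properties]
----
kafka.ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=.*$/$1/,DEFAULT
----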

* If you use SASL authentication (likely, if you have a Kerberos environment):
+
[source, properties]
----
kafka.sasl.kerberos.principal.to.local.rules=...
----
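+
For example, a rule that strips the realm from principals such as `user@EXAMPLE.COM` might look like this; the realm and pattern are illustrative assumptions:
+
[source, properties]
----
kafka.sasl.kerberos.principal.to.local.rules=RULE:[1:$1@$0](.*@EXAMPLE.COM)s/@.*//,DEFAULT
----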

Furthermore, if you want to ensure that the brokers also communicate with each other using Kerberos, specify the following property, which is in any case not required for ACL purposes:

[source, properties]
----
kafka.security.inter.broker.protocol=SASL_SSL
----

[NOTE]
The last property defaults to `PLAINTEXT`.

To make the plugin work properly, the following operations must be authorized for the Topic and Cluster resource types:

* **Write**, when you want to use the plugin as a Source
* **Read**, when you want to use the plugin as a Sink
* **DescribeConfigs** and **Describe**, because the plugin uses the following two Kafka AdminClient APIs:
** listTopics
** describeCluster

To use the streams procedures, the same operations must be authorized (Read or Write), depending on which procedure you wish to use. The permissions required by the procedures and by the source/sink operations are the same.
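
As a sketch, the corresponding grants could be issued with the `kafka-acls` tool. The principal (`User:neo4j`), topic and group names, broker address, and admin client config file below are placeholders, not values from the original setup:

[source, shell]
----
# Source mode: allow the plugin principal to write to the target topic
kafka-acls --bootstrap-server localhost:9092 --command-config admin.properties \
  --add --allow-principal User:neo4j --operation Write --topic my-topic

# Sink mode: allow reads on the topic and on the plugin's consumer group
kafka-acls --bootstrap-server localhost:9092 --command-config admin.properties \
  --add --allow-principal User:neo4j --operation Read --topic my-topic --group neo4j-sink

# AdminClient calls (listTopics, describeCluster) need Describe/DescribeConfigs
kafka-acls --bootstrap-server localhost:9092 --command-config admin.properties \
  --add --allow-principal User:neo4j --operation Describe --operation DescribeConfigs --cluster
----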

For further details on how to set up and define ACLs in Kafka, please refer to the official Confluent documentation:

* https://docs.confluent.io/platform/current/kafka/authorization.html#kafka-authorization

[NOTE]
This section applies only to the Neo4j Streams plugin and not to the Kafka Connect plugin, because in that case it is the Kafka Connect framework that takes care of the authorizations.
The only special case for the Kafka Connect plugin is when you use the DLQ. If so, you have to define the **Write** authorization that the DLQ producer needs.
