
Commit c674ca1

Merge pull request #2287 from hz-b/kafka-ssl-docu
Kafka encryption documentation

1 file changed: app/alarm/Readme.md (+167 -23 lines)
@@ -3,10 +3,10 @@ Alarm System

Update of the alarm system that originally used an RDB for configuration,
JMS for updates, and an RDB for persistence of the most recent state.

This development uses Kafka to handle both, using "Compacted Topics".
For an "Accelerator" configuration, a topic of that name holds the configuration and state changes.
When clients subscribe, they receive the most recent configuration and state, and from then on updates.

Kafka Installation
@@ -23,12 +23,12 @@ kafka in `/opt/kafka`.
    # The 'examples' folder of this project contains some example scripts
    # that can be used with a kafka server in the same directory
    cd examples

    # Use wget, 'curl -O', or web browser
    wget http://ftp.wayne.edu/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz
    tar -vzxf kafka_2.12-2.3.0.tgz
    ln -s kafka_2.12-2.3.0 kafka

Check `config/zookeeper.properties` and `config/server.properties`.
By default these contain settings for keeping data in `/tmp/`, which works for initial tests,
but risks that Linux will delete the data.
@@ -37,9 +37,9 @@ For a production setup, change `zookeeper.properties`:
    # Suggest to change this to a location outside of /tmp,
    # for example /var/zookeeper-logs or /home/controls/zookeeper-logs
    dataDir=/tmp/zookeeper

Similarly, change the directory setting in `server.properties`:

    # Suggest to change this to a location outside of /tmp,
    # for example /var/kafka-logs or /home/controls/kafka-logs
    log.dirs=/tmp/kafka-logs
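
A sketch of the resulting production settings, using the example locations from the comments above (the exact paths are site-specific choices, not requirements):

    # zookeeper.properties (production example)
    dataDir=/var/zookeeper-logs

    # server.properties (production example)
    log.dirs=/var/kafka-logs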
@@ -85,7 +85,7 @@ for initial tests:
    sh start_kafka.sh

    # If kafka is started first, with the default zookeeper.connection.timeout of only 6 seconds,
    # it will fail to start and close with a null pointer exception.
    # Simply start kafka after zookeeper is running to recover.

@@ -104,7 +104,7 @@ for running Zookeeper, Kafka and the alarm server as Linux services:
    sudo systemctl enable kafka.service
    sudo systemctl enable alarm_server.service

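As a sketch, a matching unit file might look like the following (the file name, paths and dependencies are assumptions, not taken from this repository):

    # Hypothetical /etc/systemd/system/kafka.service
    [Unit]
    Description=Apache Kafka
    Requires=zookeeper.service
    After=zookeeper.service

    [Service]
    ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
    ExecStop=/opt/kafka/bin/kafka-server-stop.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
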
Kafka Demo
----------

@@ -141,10 +141,10 @@ but simply meant to learn about Kafka or to test connectivity.

Stop local instance:

    # Either <Ctrl-C> in the kafka terminal, then in the zookeeper terminal
    # Or:
    sh stop_all.sh

For more, see https://kafka.apache.org/documentation.html

@@ -160,7 +160,7 @@ It will create these topics:
* "Accelerator": Alarm configuration and state (compacted)
* "AcceleratorCommand": Commands like "acknowledge" from UI to the alarm server (deleted)
* "AcceleratorTalk": Annunciations (deleted)

The command messages are unidirectional from the alarm UI to the alarm server.
The talk messages are unidirectional from the alarm server to the alarm annunciator.
Both command and talk topics are configured to delete older messages, because only new messages are relevant.
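
To verify that the topics exist, you can list them with the standard Kafka tooling (the local default broker address is assumed):

    bin/kafka-topics.sh --bootstrap-server localhost:9092 --list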
@@ -183,8 +183,8 @@ More on this in http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka
You can track the log cleaner runs via

    tail -f logs/log-cleaner.log

Start Alarm Server
------------------

@@ -226,8 +226,8 @@ The messages in the config topic consist of a path to the alarm tree item that i
Example key:

    config:/Accelerator/Vacuum/SomePV

The message always contains the user name and host name of who is changing the configuration.

The full config topic JSON format for an alarm tree leaf:

@@ -270,7 +270,7 @@ Deleting an item consists of marking a path with a value of null. This "tombston
For example:

    config:/path/to/pv : null

This process variable is now marked as deleted. However, there is an issue: we do not know why or by whom it was deleted. To address this, a message including the missing relevant information is sent before the tombstone is set.
This message consists of a user name, host name, and a delete message.
The delete message may offer details on why the item was deleted.
@@ -282,12 +282,12 @@ The config delete message JSON format:
        "host": String,
        "delete": String
    }

The above example of deleting a PV would then look like this:

    config:/path/to/pv : {"user":"user name", "host":"host name", "delete": "Deleting"}
    config:/path/to/pv : null

The message about who deleted the PV would obviously be compacted and deleted itself, but it would be aggregated into the long-term topic beforehand, thus preserving a record of the deletion.
______________
- Type `state:`, State Topic:
@@ -317,7 +317,7 @@ The state topic JSON format for an alarm tree node:
        "mode": String,
    }

At minimum, state updates always contain a "severity".

The "latch" entry will only be present when an alarm that
is configured to latch is actually latching, i.e. entering an alarm severity
@@ -334,7 +334,7 @@ Example messages that could appear in a state topic:
In this example, the first message is issued when the alarm latches to the MAJOR severity.
The following update indicates that the PV's current severity dropped to MINOR, while the alarm severity, message, time and value
continue to reflect the latched state.

________________
- Type `command:`, Command Topic:

@@ -347,7 +347,7 @@ The command topic JSON format:
        "host": String,
        "command": String
    }

An example message that could appear in a command topic:

    command:/path/to/pv : {"user":"user name", "host":"host name", "command":"acknowledge"}
@@ -406,6 +406,150 @@ it can lock the UI while the internal TreeView code gets to traverse all 'siblin
This has been observed if there are 10000 or more siblings, i.e. direct child nodes to one node of the alarm tree.
It can be avoided by, for example, adding sub-nodes.

Encryption, Authentication and Authorization
--------------------------------------------

The default setup as described so far connects to Kafka without encryption or authentication.
While this may be acceptable for a closed control system network, you can enable encryption,
authentication and authorization for extended security.
Kafka allows many authentication schemes. The following outlines the setup for SSL encryption with
either two-way TLS authentication or user/password (a.k.a. SASL PLAIN).

### Prerequisites

To enable SSL encryption, at least the Kafka server requires an SSL certificate.
You can create your own self-signed root CA to sign these certificates.
Then add this root CA to a truststore, create a certificate for the server, sign it,
and add it to a keystore.
Confluent provides good [step-by-step documentation](https://docs.confluent.io/platform/current/security/security_tutorial.html#creating-ssl-keys-and-certificates).
Here is a short version.

Create the root CA:
```
openssl req -new -x509 -keyout rootCAKey.pem -out rootCACert.pem -days 365
```

Add it to a truststore:
```
keytool -keystore kafka.truststore.jks -alias CARoot -importcert -file rootCACert.pem
```

Create a certificate for the server (the name, i.e. keytool's "first and last name" prompt, should be the server's FQDN) and export the certificate signing request:
```
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -genkey
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file server.csr
```
Sign the CSR:
```
openssl x509 -req -CA rootCACert.pem -CAkey rootCAKey.pem -in server.csr -out serverCert.pem -days 365 -CAcreateserial
```

Import the root CA and then the signed certificate into the keystore.
The root CA must be imported first, so that keytool can establish the certificate chain of the signed certificate:
```
keytool -keystore kafka.server.keystore.jks -alias CARoot -importcert -file rootCACert.pem
keytool -keystore kafka.server.keystore.jks -alias localhost -importcert -file serverCert.pem
```
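
As a quick sanity check, you can list the keystore entries; both the `CARoot` and `localhost` entries should be present:
```
keytool -list -v -keystore kafka.server.keystore.jks
```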

If you want two-way TLS authentication, repeat the certificate creation for the clients
so that you also have a `kafka.client.keystore.jks` file.
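
A sketch of those client-side steps, mirroring the server commands above (the alias and file names are illustrative):
```
keytool -keystore kafka.client.keystore.jks -alias client -keyalg RSA -genkey
keytool -keystore kafka.client.keystore.jks -alias client -certreq -file client.csr
openssl x509 -req -CA rootCACert.pem -CAkey rootCAKey.pem -in client.csr -out clientCert.pem -days 365 -CAcreateserial
keytool -keystore kafka.client.keystore.jks -alias CARoot -importcert -file rootCACert.pem
keytool -keystore kafka.client.keystore.jks -alias client -importcert -file clientCert.pem
```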

### Configure Kafka

In `/opt/kafka/config/server.properties` add an SSL and/or SASL_SSL listener like:
```
listeners=PLAINTEXT://:9092,SSL://:9093,SASL_SSL://:9094
```
SSL will use SSL encryption and possibly two-way authentication (clients having their own certificates).
SASL_SSL will use SSL encryption and SASL authentication, which we will configure below for username/password.
You may also remove the PLAINTEXT listener if you want to disallow unencrypted communication.
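
Depending on your network setup, it may also be necessary to advertise the externally reachable host name to clients, for example (the host name is an assumption):
```
advertised.listeners=PLAINTEXT://broker.your-accelerator.org:9092,SSL://broker.your-accelerator.org:9093,SASL_SSL://broker.your-accelerator.org:9094
```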

In `/opt/kafka/config/server.properties` add the SSL configuration:
```
# If you want the brokers to authenticate to each other with SASL, use SASL_SSL here
security.inter.broker.protocol=SSL

ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>
ssl.keystore.location=/opt/kafka/config/kafka.server.keystore.jks
ssl.keystore.password=<server-keystore-pw>
ssl.key.password=<ssl-key-pw>

# Uncomment if clients must provide certificates (two-way TLS)
#ssl.client.auth=required

# Below configures SASL authentication, remove if not needed
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin-secret" \
    user_admin="admin-secret" \
    user_kafkaclient1="kafkaclient1-secret";
```

Restart Kafka for these settings to take effect.
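
To verify the encrypted listener, a handshake check with standard OpenSSL tooling can be useful (the host name is an assumption):
```
openssl s_client -connect broker.your-accelerator.org:9093 -CAfile rootCACert.pem </dev/null
```
It should print the server certificate chain and report a successful verification.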

### Configure CS-Studio UI, Alarm Server, Alarm Logger

Create a `kafka.properties` file with the following content.
For SSL:
```
security.protocol=SSL
ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>
# Uncomment these for SSL authentication (two-way TLS)
#ssl.keystore.location=/opt/kafka/config/kafka.client.keystore.jks
#ssl.keystore.password=<client-keystore-pw>
#ssl.key.password=<ssl-key-pw>
```

For SSL with SASL:
```
sasl.mechanism=PLAIN
security.protocol=SASL_SSL

ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="kafkaclient1" \
    password="kafkaclient1-secret";
```
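
The `kafka.properties` file can be tested independently of CS-Studio with the standard console consumer (the host name is an assumption; use the port matching your listener, e.g. 9093 for SSL):
```
bin/kafka-console-consumer.sh --bootstrap-server broker.your-accelerator.org:9093 \
    --consumer.config kafka.properties --topic Accelerator --from-beginning
```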

Adjust the port of the kafka server in your phoebus settings and preferably
use the FQDN instead of `localhost` for SSL connections. Otherwise certificate
validation might fail.
Edit the preferences to add
```
org.phoebus.applications.alarm/kafka_properties=kafka.properties
```
or pass it with `-kafka_properties kafka.properties` to the service.
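
For example, a settings sketch assuming the broker `broker.your-accelerator.org` with the SSL listener on port 9093 (the `server` preference holds the Kafka host:port used by the alarm system):
```
org.phoebus.applications.alarm/server=broker.your-accelerator.org:9093
org.phoebus.applications.alarm/kafka_properties=kafka.properties
```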

### Authorization

With authenticated clients you could then enable authorization for fine-grained control.
In your kafka server add to `/opt/kafka/config/server.properties`:

```
# Enable the authorizer
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Default to no restrictions
allow.everyone.if.no.acl.found=true
# Set brokers as superusers (note the ';' separator)
super.users=User:broker.your-accelerator.org;User:admin
```

Then run, for example,
```
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --add --allow-principal User:* --operation read --topic Accelerator --topic AcceleratorCommand --topic AcceleratorTalk
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --add --allow-principal User:special-client.your-accelerator.org --operation read --operation write --topic Accelerator --topic AcceleratorCommand --topic AcceleratorTalk
```
to allow anybody to see the active alarms, but only the special-client to acknowledge them and to change the configuration.
The `../config/client.properties` must have credentials to authenticate the client as a super user,
i.e. admin or broker.your-accelerator.org in this case.
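
The resulting rules can be reviewed with the same tool (same broker and credentials assumed):
```
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --list --topic Accelerator
```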

Issues
------
@@ -415,11 +559,11 @@ In earlier versions of Kafka, the log cleaner sometimes failed to compact the lo
The file `kafka/logs/log-cleaner.log` would not show any log-cleaner action.
The workaround was to stop the alarm server, alarm clients, kafka, then restart them.
When functional, the file `kafka/logs/log-cleaner.log` shows periodic compaction like this:

    [2018-06-01 15:01:01,652] INFO Starting the log cleaner (kafka.log.LogCleaner)
    [2018-06-01 15:01:16,697] INFO Cleaner 0: Beginning cleaning of log Accelerator-0. (kafka.log.LogCleaner)
    ...
    Start size: 0.1 MB (414 messages)
    End size: 0.1 MB (380 messages)
    8.9% size reduction (8.2% fewer messages)