Check `config/zookeeper.properties` and `config/server.properties`.
By default these contain settings for keeping data in `/tmp/`, which works for initial tests,
but risks that Linux will delete the data when it cleans up `/tmp/`.
For a production setup, change `zookeeper.properties`:

    # Suggest to change this to a location outside of /tmp,
    # for example /var/zookeeper-logs or /home/controls/zookeeper-logs
    dataDir=/tmp/zookeeper

Similarly, change the directory setting in `server.properties`:

    # Suggest to change this to a location outside of /tmp,
    # for example /var/kafka-logs or /home/controls/kafka-logs
    log.dirs=/tmp/kafka-logs
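The edit can also be scripted; a minimal sketch, assuming GNU `sed` and working on a scratch copy of `zookeeper.properties` so the demo is self-contained (the real file lives under `config/`):

```shell
# Work on a scratch copy of zookeeper.properties so nothing real is touched
mkdir -p /tmp/demo-config
printf 'dataDir=/tmp/zookeeper\nclientPort=2181\n' > /tmp/demo-config/zookeeper.properties
# Point dataDir at a persistent location, e.g. the /var/zookeeper-logs suggested above
sed -i 's|^dataDir=.*|dataDir=/var/zookeeper-logs|' /tmp/demo-config/zookeeper.properties
grep '^dataDir=' /tmp/demo-config/zookeeper.properties
# → dataDir=/var/zookeeper-logs
```

The same one-liner pattern applies to `log.dirs` in `server.properties`.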
For initial tests:

    sh start_kafka.sh

    # If kafka is started first, with the default zookeeper.connection.timeout of only 6 seconds,
    # it will fail to start and close with a null pointer exception.
    # Simply start kafka after zookeeper is running to recover.

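Since Kafka must only be started once Zookeeper is reachable, a start script can poll Zookeeper's client port first. A sketch, assuming Zookeeper's default client port 2181 and `python3` for a portable TCP probe:

```shell
# Poll until something accepts TCP connections on localhost:$1
wait_for_port() {
  until python3 -c "import socket; socket.create_connection(('localhost', $1), 1)" 2>/dev/null
  do
    sleep 1
  done
}

# usage: wait_for_port 2181 && sh start_kafka.sh
```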
For running Zookeeper, Kafka and the alarm server as Linux services:

    sudo systemctl enable kafka.service
    sudo systemctl enable alarm_server.service

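For reference, a `kafka.service` unit along these lines would also make systemd enforce the start order; the paths and the `controls` user here are assumptions, not part of the setup above:

    [Unit]
    Description=Kafka message broker
    After=zookeeper.service
    Requires=zookeeper.service

    [Service]
    User=controls
    ExecStart=/home/controls/kafka/start_kafka.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

With `After=` and `Requires=`, systemd only starts Kafka once the Zookeeper unit is up, matching the manual start order described earlier.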
Kafka Demo
----------

This setup is not intended for production use, but simply meant to learn about Kafka or to test connectivity.

Stop local instance:

    # Either <Ctrl-C> in the kafka terminal, then in the zookeeper terminal
    # Or:
    sh stop_all.sh

For more, see https://kafka.apache.org/documentation.html

It will create these topics:

* "Accelerator": Alarm configuration and state (compacted)
* "AcceleratorCommand": Commands like "acknowledge" from UI to the alarm server (deleted)
* "AcceleratorTalk": Annunciations (deleted)

The command messages are unidirectional from the alarm UI to the alarm server.
The talk messages are unidirectional from the alarm server to the alarm annunciator.
Both command and talk topics are configured to delete older messages, because only new messages are relevant.
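The compacted-vs-deleted distinction corresponds to Kafka's standard per-topic `cleanup.policy` setting; in terms of topic configuration it amounts to:

    cleanup.policy=compact   # "Accelerator": keep the latest config/state per key
    cleanup.policy=delete    # "AcceleratorCommand", "AcceleratorTalk": drop old messages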
More on this in http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka

You can track the log cleaner runs via

    tail -f logs/log-cleaner.log

Start Alarm Server
------------------

The messages in the config topic consist of a path to the alarm tree item that is changed.

Example key:

    config:/Accelerator/Vacuum/SomePV

The message always contains the user name and host name of who is changing the configuration.

The full config topic JSON format for an alarm tree leaf:
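As an illustration of the pattern, a config update might pair the example key with a value like the following; the values are hypothetical, and the remaining fields of the leaf format are not shown in this excerpt:

    config:/Accelerator/Vacuum/SomePV : {"user": "jane", "host": "office-pc", ...}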
Deleting an item consists of marking a path with a value of null. This "tombstone" marks the item as deleted.
For example:

    config:/path/to/pv : null

This process variable is now marked as deleted. However, there is an issue: we do not know why, or by whom, it was deleted. To address this, a message including the missing relevant information is sent before the tombstone is set.
This message consists of a user name, host name, and a delete message.
The delete message may offer details on why the item was deleted.
The config delete message JSON format:

    {
        "user": String,
        "host": String,
        "delete": String
    }

The above example of deleting a PV would then look like this:
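A reconstruction from the fields just described (the user, host and delete text are hypothetical):

    config:/path/to/pv : {"user": "jane", "host": "office-pc", "delete": "Removed from vacuum section"}
    config:/path/to/pv : null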
The message about who deleted the PV would obviously be compacted and deleted itself, but it would be aggregated into the long-term topic beforehand, thus preserving a record of the deletion.

______________

- Type `state:`, State Topic:
The state topic JSON format for an alarm tree node:

    {
        "severity": String,
        "mode": String,
    }

At minimum, state updates always contain a "severity".
The "latch" entry will only be present when an alarm that is configured to latch is actually latching, i.e. entering an alarm severity.
Example messages that could appear in a state topic:
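A reconstruction of the two updates discussed below; apart from "severity", the field spellings and all values are assumptions based on the fields this section names:

    state:/Accelerator/Vacuum/SomePV : {"severity": "MAJOR", "latch": true, "message": "high", "value": "12.8", "current_severity": "MAJOR", ...}
    state:/Accelerator/Vacuum/SomePV : {"severity": "MAJOR", "message": "high", "value": "12.8", "current_severity": "MINOR", ...}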
In this example, the first message is issued when the alarm latches to the MAJOR severity.
The following update indicates that the PV's current severity dropped to MINOR, while the alarm severity, message, time and value continue to reflect the latched state.

________________

- Type `command:`, Command Topic:

The command topic JSON format:

    {
        "user": String,
        "host": String,
        "command": String
    }

An example message that could appear in a command topic:
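For instance, acknowledging an alarm on the earlier example path (the user and host values are hypothetical):

    command:/Accelerator/Vacuum/SomePV : {"user": "jane", "host": "office-pc", "command": "acknowledge"}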
It can lock the UI while the internal TreeView code gets to traverse all 'siblings'.
This has been observed if there are 10000 or more siblings, i.e. direct child nodes to one node of the alarm tree.
It can be avoided by, for example, adding sub-nodes.

Encryption, Authentication and Authorization
--------------------------------------------

The default setup as described so far connects to Kafka without encryption or authentication.
While this may be acceptable for a closed control system network, you can enable encryption,
authentication and authorization for extended security.
Kafka allows many authentication schemes. The following outlines the setup for SSL encryption with
either two-way TLS authentication or user/password (a.k.a. SASL PLAIN).

### Prerequisites

To enable SSL encryption, at least the Kafka server requires an SSL certificate.
You can create your own self-signed root CA to sign these certificates.
Then add this root CA to a truststore, create a certificate for the server, sign it,
and add it to a keystore.
Confluent provides a good [step-by-step documentation](https://docs.confluent.io/platform/current/security/security_tutorial.html#creating-ssl-keys-and-certificates).