
Commit 2d42f3a
Merge remote-tracking branch 'origin/master' into elastic-8-2-alarm-logger
2 parents 9d8e085 + 753733c

24 files changed: +1069 −247 lines

app/alarm/Readme.md

Lines changed: 179 additions & 33 deletions
@@ -3,10 +3,10 @@ Alarm System

Update of the alarm system that originally used RDB for configuration,
JMS for updates, RDB for persistence of most recent state.

This development uses Kafka to handle both, using "Compacted Topics".
For an "Accelerator" configuration, a topic of that name holds the configuration and state changes.
When clients subscribe, they receive the most recent configuration and state, and from then on updates.


Kafka Installation
@@ -23,12 +23,12 @@ kafka in `/opt/kafka`.
    # The 'examples' folder of this project contains some example scripts
    # that can be used with a kafka server in the same directory
    cd examples

    # Use wget, 'curl -O', or web browser
    wget http://ftp.wayne.edu/apache/kafka/2.3.0/kafka_2.12-2.3.0.tgz
    tar -vzxf kafka_2.12-2.3.0.tgz
    ln -s kafka_2.12-2.3.0 kafka

Check `config/zookeeper.properties` and `config/server.properties`.
By default these contain settings for keeping data in `/tmp/`, which works for initial tests,
but risks that Linux will delete the data.
@@ -37,9 +37,9 @@ For a production setup, change `zookeeper.properties`:
    # Suggest to change this to a location outside of /tmp,
    # for example /var/zookeeper-logs or /home/controls/zookeeper-logs
    dataDir=/tmp/zookeeper

Similarly, change the directory setting in `server.properties`

    # Suggest to change this to a location outside of /tmp,
    # for example /var/kafka-logs or /home/controls/kafka-logs
    log.dirs=/tmp/kafka-logs
@@ -48,9 +48,9 @@ Similarly, change the directory setting in `server.properties`
Kafka depends on Zookeeper. By default, Kafka will quit if it cannot connect to Zookeeper within 6 seconds.
When the Linux host boots up, this may not be long enough to allow Zookeeper to start.

    # Timeout in ms for connecting to zookeeper defaults to 6000ms.
    # Suggest a much longer time (5 minutes)
    zookeeper.connection.timeout.ms=300000

By default, Kafka will automatically create topics.
This means you could accidentally start an alarm server for a non-existing configuration.
@@ -60,11 +60,13 @@ Best disable auto-topic-creation, create topics on purpose with the correct sett
and have alarm tools that try to access a non-existing configuration fail.

    # Suggest to add this to prevent automatic topic creation,
    auto.create.topics.enable=false

If the following "First steps" generate errors of the type

    WARN Error while fetching metadata with correlation id 39 : .. LEADER_NOT_AVAILABLE
or
    ERROR ..TimeoutException: Timed out waiting for a node assignment

then define the host name in `config/server.properties`.
For tests, you can use localhost:
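
The actual listener settings fall outside this hunk; as a sketch, the standard `server.properties` entries look like this (host name is a placeholder):

    # For tests on a single host:
    listeners=PLAINTEXT://localhost:9092
    # For production, advertise the real host name so remote clients can connect:
    #advertised.listeners=PLAINTEXT://kafka.your-site.org:9092
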
@@ -83,7 +85,7 @@ for initial tests:
    sh start_kafka.sh

    # If kafka is started first, with the default zookeeper.connection.timeout of only 6 seconds,
    # it will fail to start and close with a null pointer exception.
    # Simply start kafka after zookeeper is running to recover.

@@ -102,7 +104,7 @@ for running Zookeeper, Kafka and the alarm server as Linux services:
    sudo systemctl enable kafka.service
    sudo systemctl enable alarm_server.service

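A minimal sketch of a Kafka unit file that makes the boot order explicit; the paths, user name and `zookeeper.service` unit name are assumptions, and the project's example files may differ:

    # /etc/systemd/system/kafka.service (sketch)
    [Unit]
    Description=Kafka
    Requires=zookeeper.service
    After=zookeeper.service

    [Service]
    User=controls
    ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
    ExecStop=/opt/kafka/bin/kafka-server-stop.sh
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target
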
Kafka Demo
----------

@@ -111,13 +113,13 @@ It is not required for the alarm system setup
but simply meant to learn about Kafka or to test connectivity.

    # Create new topic
-   kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 1 --partitions 1 --topic test
+   kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create --replication-factor 1 --partitions 1 --topic test

    # Topic info
-   kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --list
-   kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe
-   kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
-   kafka/bin/kafka-configs.sh --zookeeper localhost:2181 --entity-type topics --describe
+   kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list
+   kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe
+   kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test
+   kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type topics --describe

    # Produce messages for topic (no key)
    kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
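
The alarm topics use keyed messages, so it can help to also test with a key; a sketch using standard console-producer options:

    # Produce keyed messages; then type e.g.  mykey:myvalue
    kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test \
        --property parse.key=true --property key.separator=:
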
@@ -133,16 +135,16 @@ but simply meant to learn about Kafka or to test connectivity.
    kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --property print.key=true --property key.separator=": " --topic test --from-beginning

    # Delete topic
-   kafka/bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
+   kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic test


Stop local instance:

    # Either <Ctrl-C> in the kafka terminal, then in the zookeeper terminal

    # Or:
    sh stop_all.sh

For more, see https://kafka.apache.org/documentation.html


@@ -158,7 +160,7 @@ It will create these topics:
* "Accelerator": Alarm configuration and state (compacted)
* "AcceleratorCommand": Commands like "acknowledge" from UI to the alarm server (deleted)
* "AcceleratorTalk": Annunciations (deleted)

The command messages are unidirectional from the alarm UI to the alarm server.
The talk messages are unidirectional from the alarm server to the alarm annunciator.
Both command and talk topics are configured to delete older messages, because only new messages are relevant.
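
As a sketch of what that setup amounts to (the actual tool may differ; the retention time shown is an assumption):

    # Configuration & state: compacted, the latest message per key is retained
    kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
        --replication-factor 1 --partitions 1 \
        --config cleanup.policy=compact --topic Accelerator

    # Commands: old messages are simply deleted, only new ones matter
    kafka/bin/kafka-topics.sh --bootstrap-server localhost:9092 --create \
        --replication-factor 1 --partitions 1 \
        --config cleanup.policy=delete --config retention.ms=20000 --topic AcceleratorCommand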
@@ -181,8 +183,8 @@ More on this in http://www.shayne.me/blog/2015/2015-06-25-everything-about-kafka
You can track the log cleaner runs via

    tail -f logs/log-cleaner.log

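To verify the cleanup settings of an existing topic, ask Kafka (a sketch, assuming the "Accelerator" topic):

    kafka/bin/kafka-configs.sh --bootstrap-server localhost:9092 \
        --entity-type topics --entity-name Accelerator --describe
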
Start Alarm Server
------------------

@@ -224,8 +226,8 @@ The messages in the config topic consist of a path to the alarm tree item that i
Example key:

    config:/Accelerator/Vacuum/SomePV

The message always contains the user name and host name of who is changing the configuration.

The full config topic JSON format for an alarm tree leaf:

@@ -268,7 +270,7 @@ Deleting an item consists of marking a path with a value of null. This "tombston
For example:

    config:/path/to/pv : null

This process variable is now marked as deleted. However, there is an issue: we do not know why, or by whom, it was deleted. To address this, a message including the missing relevant information is sent before the tombstone is set.
This message consists of a user name, host name, and a delete message.
The delete message may offer details on why the item was deleted.
@@ -280,12 +282,12 @@ The config delete message JSON format:
        "host": String,
        "delete": String
    }

The above example of deleting a PV would then look like this:

    config:/path/to/pv : {"user":"user name", "host":"host name", "delete": "Deleting"}
    config:/path/to/pv : null

The message about who deleted the PV would obviously be compacted and deleted itself, but it would be aggregated into the long-term topic beforehand, thus preserving a record of the deletion.
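
For illustration only, such a delete-info message could be produced by hand with the console producer; a sketch (in practice the alarm tools write these messages, and the final null tombstone is not easily produced from the console):

    # Use '|' as key separator, since the key itself contains ':'
    kafka/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic Accelerator \
        --property parse.key=true --property 'key.separator=|'
    # Then type:
    #   config:/path/to/pv|{"user":"user name", "host":"host name", "delete": "Deleting"}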
______________
- Type `state:`, State Topic:
@@ -315,7 +317,7 @@ The state topic JSON format for an alarm tree node:
        "mode": String,
    }

At minimum, state updates always contain a "severity".

The "latch" entry will only be present when an alarm that
is configured to latch is actually latching, i.e. entering an alarm severity
@@ -332,7 +334,7 @@ Example messages that could appear in a state topic:
In this example, the first message is issued when the alarm latches to the MAJOR severity.
The following update indicates that the PV's current severity dropped to MINOR, while the alarm severity, message, time and value
continue to reflect the latched state.

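To watch such state updates arrive, the demo consumer shown earlier works on the alarm topic as well (a sketch, assuming the "Accelerator" topic):

    kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --property print.key=true --property key.separator=": " --topic Accelerator --from-beginning
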
________________
- Type `command:`, Command Topic:

@@ -345,7 +347,7 @@ The command topic JSON format:
        "host": String,
        "command": String
    }

An example message that could appear in a command topic:

    command:/path/to/pv : {"user":"user name", "host":"host name", "command":"acknowledge"}
@@ -404,6 +406,150 @@ it can lock the UI while the internal TreeView code gets to traverse all 'siblin
This has been observed if there are 10000 or more siblings, i.e. direct child nodes to one node of the alarm tree.
It can be avoided by, for example, adding sub-nodes.

Encryption, Authentication and Authorization
--------------------------------------------

The default setup as described so far connects to Kafka without encryption or authentication.
While this may be acceptable for a closed control system network, you can enable encryption,
authentication and authorization for extended security.
Kafka allows many authentication schemes. The following outlines the setup for SSL encryption with
either two-way TLS authentication or user/password (a.k.a. SASL PLAIN).

### Prerequisites

To enable SSL encryption, at least the Kafka server requires an SSL certificate.
You can create your own self-signed root CA to sign these certificates.
Then add this root CA to a truststore, create a certificate for the server, sign it,
and add it to a keystore.
Confluent provides a good [step-by-step documentation](https://docs.confluent.io/platform/current/security/security_tutorial.html#creating-ssl-keys-and-certificates).
Here is a short version.

Create the root CA:
```
openssl req -new -x509 -keyout rootCAKey.pem -out rootCACert.pem -days 365
```

Add it to a truststore:
```
keytool -keystore kafka.truststore.jks -alias CARoot -importcert -file rootCACert.pem
```

Create a certificate for the server (enter the server's FQDN when asked for your first and last name) and export the certificate signing request:
```
keytool -keystore kafka.server.keystore.jks -alias localhost -keyalg RSA -genkey
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file server.csr
```
Sign the CSR:
```
openssl x509 -req -CA rootCACert.pem -CAkey rootCAKey.pem -in server.csr -out serverCert.pem -days 365 -CAcreateserial
```

Import the signed certificate and the root CA into the keystore:
```
keytool -keystore kafka.server.keystore.jks -alias localhost -importcert -file serverCert.pem
keytool -keystore kafka.server.keystore.jks -alias CARoot -importcert -file rootCACert.pem
```
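
As a quick check, the keystore contents can be listed (the password is whatever was chosen above):
```
keytool -list -v -keystore kafka.server.keystore.jks
```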

If you want two-way TLS authentication, repeat the certificate creation for the clients
so that you also have a `kafka.client.keystore.jks` file.


### Configure Kafka

In `/opt/kafka/config/server.properties` add an SSL and/or SASL_SSL listener like:
```
listeners=PLAINTEXT://:9092,SSL://:9093,SASL_SSL://:9094
```
SSL will use SSL encryption and possibly two-way authentication (clients having their own certificates).
SASL_SSL will use SSL encryption and SASL authentication, which we will configure below for username/password.
You may also remove the PLAINTEXT listener if you want to disallow unencrypted communication.

In `/opt/kafka/config/server.properties` add the SSL configuration:
```
# If you want the brokers to authenticate to each other with SASL, use SASL_SSL here
security.inter.broker.protocol=SSL

ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>
ssl.keystore.location=/opt/kafka/config/kafka.server.keystore.jks
ssl.keystore.password=<server-keystore-pw>
ssl.key.password=<ssl-key-pw>

# Uncomment if clients must provide certificates (two-way TLS)
#ssl.client.auth=required

# Below configures SASL authentication, remove if not needed
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN

listener.name.sasl_ssl.plain.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="admin" \
    password="admin-secret" \
    user_admin="admin-secret" \
    user_kafkaclient1="kafkaclient1-secret";

```

Restart Kafka for these settings to take effect.
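
After the restart, the SSL listener can be checked from a client host, for example with OpenSSL (host name is a placeholder):
```
openssl s_client -connect broker.your-accelerator.org:9093 -CAfile rootCACert.pem </dev/null
```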

### Configure CS-Studio UI, Alarm Server, Alarm Logger

Create a `kafka.properties` file with the following content.
For SSL:
```
security.protocol=SSL
ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>
# Uncomment these for SSL authentication (two-way TLS)
#ssl.keystore.location=/opt/kafka/config/kafka.client.keystore.jks
#ssl.keystore.password=<client-keystore-pw>
#ssl.key.password=<ssl-key-pw>
```

For SSL with SASL:
```
sasl.mechanism=PLAIN
security.protocol=SASL_SSL

ssl.truststore.location=/opt/kafka/config/kafka.truststore.jks
ssl.truststore.password=<truststore-pw>

sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="kafkaclient1" \
    password="kafkaclient1-secret";
```
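
The same file can also be used for a quick connectivity test with the console tools; a sketch, assuming the "Accelerator" topic and the SSL listener on port 9093:
```
kafka/bin/kafka-console-consumer.sh --bootstrap-server broker.your-accelerator.org:9093 \
    --consumer.config kafka.properties --topic Accelerator --from-beginning
```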

Adjust the port of the Kafka server in your Phoebus settings and preferably
use the FQDN instead of `localhost` for SSL connections. Otherwise certificate
validation might fail.
Edit the preferences to add
```
org.phoebus.applications.alarm/kafka_properties=kafka.properties
```
or pass it with `-kafka_properties kafka.properties` to the service.

### Authorization

With authenticated clients you could then enable authorization for fine-grained control.
On your Kafka server, add to `/opt/kafka/config/server.properties`:

```
# Enable the authorizer
authorizer.class.name=kafka.security.authorizer.AclAuthorizer
# Default to no restrictions
allow.everyone.if.no.acl.found=true
# Set brokers as superusers
super.users=User:broker.your-accelerator.org,User:admin
```

Then run for example
```
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --add --allow-principal User:* --operation read --topic Accelerator --topic AcceleratorCommand --topic AcceleratorTalk
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --add --allow-principal User:special-client.your-accelerator.org --operation read --operation write --topic Accelerator --topic AcceleratorCommand --topic AcceleratorTalk
```
to allow anybody to see the active alarms, but only the special-client to acknowledge them and to change the configuration.
The `../config/client.properties` must have credentials to authenticate the client as a super user,
i.e. `admin` or `broker.your-accelerator.org` in this case.
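
The resulting ACLs can be reviewed with:
```
./kafka-acls.sh --bootstrap-server broker.your-accelerator.org:9093 --command-config ../config/client.properties --list
```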


Issues
@@ -413,11 +559,11 @@ In earlier versions of Kafka, the log cleaner sometimes failed to compact the lo
The file `kafka/logs/log-cleaner.log` would not show any log-cleaner action.
The workaround was to stop the alarm server, alarm clients, kafka, then restart them.
When functional, the file `kafka/logs/log-cleaner.log` shows periodic compaction like this:

    [2018-06-01 15:01:01,652] INFO Starting the log cleaner (kafka.log.LogCleaner)
    [2018-06-01 15:01:16,697] INFO Cleaner 0: Beginning cleaning of log Accelerator-0. (kafka.log.LogCleaner)
    ...
    Start size: 0.1 MB (414 messages)
    End size: 0.1 MB (380 messages)
    8.9% size reduction (8.2% fewer messages)


app/display/editor/src/main/java/org/csstudio/display/builder/editor/properties/RulesDialog.java

Lines changed: 2 additions & 2 deletions
@@ -81,7 +81,7 @@
public class RulesDialog extends Dialog<List<RuleInfo>>
{
    /** Expression info as property-based item for table */
-   private abstract static class ExprItem<T>
+   protected abstract static class ExprItem<T>
    {
        final protected StringProperty boolExp = new SimpleStringProperty();
        final protected SimpleObjectProperty<Node> field = new SimpleObjectProperty<>();
@@ -223,7 +223,7 @@ public static <T> ExprItem<?> makeNewFromOld(
    }

    /** Modifiable RuleInfo */
-   private static class RuleItem
+   protected static class RuleItem
    {
        public List<ExprItem<?>> expressions;
        public List<PVTableItem> pvs;
