# Kafka Template

The java-spring-boot2 `kafka` template provides a consistent way of developing Spring Boot applications that connect to Kafka. This template is an extension of the `default` template and uses [spring-kafka](https://spring.io/projects/spring-kafka#overview) to connect to a Kafka instance running on Kubernetes, managed by the [Strimzi](https://strimzi.io/) Kafka operator.

The `kafka` template provides a `pom.xml` file that references the parent POM defined by the stack, dependencies that enable the Spring Boot application to connect to Kafka, a simple producer that publishes messages to a Kafka topic, and a simple consumer that consumes the messages the producer publishes to that topic. It also provides a basic liveness endpoint and a set of unit tests that ensure the enabled actuator endpoints work properly: `/actuator/health`, `/actuator/metrics`, `/actuator/prometheus`, and `/actuator/liveness`.

## Getting Started

1. Create a new folder in your local directory and initialize it using the Appsody CLI, e.g.:

```
mkdir my-project
cd my-project
appsody init java-spring-boot2 kafka
```

This will initialize a Spring Boot 2 project using the `kafka` template.

2. Once your project has been initialized, you can run your application using the following command:

```
appsody run --docker-options "--env KAFKA_BOOTSTRAP_SERVERS=${KAFKA_BOOTSTRAP_SERVERS}"
```

E.g.:

```
appsody run --network kafka_default --docker-options "--env KAFKA_BOOTSTRAP_SERVERS=kafka:9092"
```

Here `kafka_default` (the value passed to `--network`) is the name of the Docker network in which the Kafka container is running.

This template expects the `KAFKA_BOOTSTRAP_SERVERS` environment variable to be set to the addresses of the Kafka bootstrap servers.

This launches a Docker container that runs your application in the foreground, exposing it on port 8080. You should see the producer publish messages to the Kafka topic and the consumer read them. The application will be restarted automatically when changes are detected.

3. You should be able to access the following endpoints, as they are exposed by your template application by default:

* Health endpoint: http://localhost:8080/actuator/health
* Liveness endpoint: http://localhost:8080/actuator/liveness
* Metrics endpoint: http://localhost:8080/actuator/metrics
* Prometheus endpoint: http://localhost:8080/actuator/prometheus

4. To deploy the application to Kubernetes, run the following command:

```
appsody deploy
```

Make sure to add the `KAFKA_BOOTSTRAP_SERVERS` environment variable to `app-deploy.yaml` before running the above command:

```
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: ${KAFKA_BOOTSTRAP_SERVERS}
```

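As a sketch of how the `${KAFKA_BOOTSTRAP_SERVERS}` placeholder can be filled in before deploying, the following stand-alone example substitutes it with `sed`. The `env-snippet.yaml` file name and the address are only illustrative; in practice you would edit `app-deploy.yaml` directly or template it with your own tooling:

```shell
# hypothetical snippet standing in for the env section of app-deploy.yaml
cat > env-snippet.yaml <<'EOF'
env:
  - name: KAFKA_BOOTSTRAP_SERVERS
    value: ${KAFKA_BOOTSTRAP_SERVERS}
EOF

# substitute the placeholder with the real bootstrap address
KAFKA_BOOTSTRAP_SERVERS=my-cluster-kafka-bootstrap.strimzi.svc.cluster.local:9092
sed "s|\${KAFKA_BOOTSTRAP_SERVERS}|${KAFKA_BOOTSTRAP_SERVERS}|" env-snippet.yaml
```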
If you are trying to connect to a Kafka instance managed by the Strimzi Kafka operator, the value of `KAFKA_BOOTSTRAP_SERVERS` should be a fully qualified service hostname, e.g. `my-cluster-kafka-bootstrap.strimzi.svc.cluster.local:9092`, where:

* `my-cluster` is the Kafka resource name.
* `kafka-bootstrap` is the broker load balancer name.
* `strimzi` is the namespace in which the Kafka instance is deployed.
* `9092` is the PLAINTEXT port.

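Putting those pieces together, the hostname can be assembled like this (a throwaway sketch; the variable names are only illustrative):

```shell
CLUSTER=my-cluster   # Kafka resource name
NAMESPACE=strimzi    # namespace where the Kafka instance is deployed
PORT=9092            # PLAINTEXT port

KAFKA_BOOTSTRAP_SERVERS="${CLUSTER}-kafka-bootstrap.${NAMESPACE}.svc.cluster.local:${PORT}"
echo "$KAFKA_BOOTSTRAP_SERVERS"
# prints my-cluster-kafka-bootstrap.strimzi.svc.cluster.local:9092
```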
5. To deploy an application that connects to Kafka managed by the Strimzi operator where the brokers require TLS client authentication:

Add the following properties to `application.properties`:

```
spring.kafka.properties.security.protocol=SSL
spring.kafka.properties.ssl.truststore.location=/etc/secrets/keystores/truststore.p12
spring.kafka.properties.ssl.truststore.password=${TRUSTSTORE_PASSWORD}
spring.kafka.properties.ssl.truststore.type=PKCS12
spring.kafka.properties.ssl.keystore.location=/etc/secrets/keystores/keystore.p12
spring.kafka.properties.ssl.keystore.password=${KEYSTORE_PASSWORD}
spring.kafka.properties.ssl.keystore.type=PKCS12
spring.kafka.properties.ssl.key.password=${KEYSTORE_PASSWORD}
spring.kafka.properties.ssl.endpoint.identification.algorithm=
```

`TRUSTSTORE_PASSWORD` is the password you used when creating the truststore.

`KEYSTORE_PASSWORD` is the password you used when creating the keystore.

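For reference, a PKCS12 keystore like the one described here can be created locally from a key/certificate pair with `openssl`. This is only a sketch: the self-signed key and certificate below are throwaway stand-ins for the `user.key`/`user.crt` pair from the Kafka user secret, and `changeit` stands in for your real `KEYSTORE_PASSWORD`:

```shell
# generate a throwaway key and self-signed certificate
# (stand-ins for user.key/user.crt from the KafkaUser secret)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=my-user" \
  -keyout user.key -out user.crt

# bundle them into a PKCS12 keystore protected by the keystore password
KEYSTORE_PASSWORD=changeit
openssl pkcs12 -export -inkey user.key -in user.crt \
  -out keystore.p12 -password "pass:${KEYSTORE_PASSWORD}" -name my-user
```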
Next, add the following in the `app-deploy.yaml` under the `spec` section.

* Add the following volumes:

```
volumes:
# emptyDir volume to store the keystore and truststore files so that the application container can eventually read them.
- emptyDir: {}
  name: keystore-volume
# this is the secret that is created when the Kafka user is created
- name: my-user-credentials
  secret:
    secretName: my-user
# secret that holds the CA certificate created by the operator for the brokers
- name: my-cluster-cluster-ca-cert
  secret:
    secretName: my-cluster-cluster-ca-cert
```
* Volume mount the `keystore-volume`:

```
volumeMounts:
- mountPath: /etc/secrets/keystores
  name: keystore-volume
```
* Add the `KAFKA_BOOTSTRAP_SERVERS` environment variable. E.g.:

```
env:
- name: KAFKA_BOOTSTRAP_SERVERS
  value: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
```

`9093` is the TLS port.

* Add `initContainers` that generate the keystore and truststore, which will eventually be used by the application container:

```
initContainers:
- args:
  - -c
  - echo $ca_bundle && csplit -z -f crt- $ca_bundle '/-----BEGIN CERTIFICATE-----/'
    '{*}' && for file in crt-*; do keytool -import -noprompt -keystore $truststore_jks
    -file $file -storepass $password -storetype PKCS12 -alias service-$file; done
  command:
  - /bin/bash
  env:
  - name: ca_bundle
    value: /etc/secrets/my-cluster-cluster-ca-cert/ca.crt
  - name: truststore_jks
    value: /etc/secrets/keystores/truststore.p12
  - name: password
    value: ${TRUSTSTORE_PASSWORD}
  image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
  name: pem-to-truststore
  volumeMounts:
  - mountPath: /etc/secrets/keystores
    name: keystore-volume
  - mountPath: /etc/secrets/my-user
    name: my-user-credentials
    readOnly: true
  - mountPath: /etc/secrets/my-cluster-cluster-ca-cert
    name: my-cluster-cluster-ca-cert
    readOnly: true
- args:
  - -c
  - openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password
    pass:$password -name "name"
  command:
  - /bin/bash
  env:
  - name: keyfile
    value: /etc/secrets/my-user/user.key
  - name: crtfile
    value: /etc/secrets/my-user/user.crt
  - name: keystore_pkcs12
    value: /etc/secrets/keystores/keystore.p12
  - name: password
    value: ${KEYSTORE_PASSWORD}
  image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
  name: pem-to-keystore
  volumeMounts:
  - mountPath: /etc/secrets/keystores
    name: keystore-volume
  - mountPath: /etc/secrets/my-user
    name: my-user-credentials
    readOnly: true
```
Here `my-user` is the Kafka user and `my-cluster` is the Kafka cluster name.
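To see what the first init container's splitting step does, here is a local sketch using a fake two-certificate bundle (`ca.crt` below is just a stand-in, and the `keytool` import is left out since it needs a JDK):

```shell
# fake CA bundle containing two "certificates"
cat > ca.crt <<'EOF'
-----BEGIN CERTIFICATE-----
AAAA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
BBBB
-----END CERTIFICATE-----
EOF

# split the bundle into one file per certificate, as the pem-to-truststore
# init container does before importing each piece into the truststore
csplit -z -f crt- ca.crt '/-----BEGIN CERTIFICATE-----/' '{*}'
ls crt-*
```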