This repository was archived by the owner on Jul 27, 2023. It is now read-only.

Commit 516d028

yharish991 and skoh7645 authored

java-spring-boot2: add kafka template (#742)

* java-spring-boot2 - add kafka template
* parameterize KafkaTemplate and some clean up
* java-spring-boot2 - changes in README.md
* java-spring-boot2 - update readme with TLS support
* java-spring-boot2 - clean up

Co-authored-by: Sandy Koh <sandykoh99@gmail.com>

1 parent 1ed4340 commit 516d028

13 files changed: +515 -1 lines changed

incubator/java-spring-boot2/stack.yaml

Lines changed: 1 addition & 1 deletion

@@ -1,5 +1,5 @@
 name: Spring Boot®
-version: 0.3.28
+version: 0.3.29
 description: Spring Boot using OpenJ9 and Maven
 license: Apache-2.0
 language: java
Lines changed: 28 additions & 0 deletions

@@ -0,0 +1,28 @@
/target/*
.appsody-spring-trigger

### STS ###
.apt_generated
.classpath
.factorypath
.project
.settings
.springBeans
.sts4-cache

### IntelliJ IDEA ###
.idea
*.iws
*.iml
*.ipr

### NetBeans ###
/nbproject/private/
/nbbuild/
/dist/
/nbdist/
/.nb-gradle/
/build/

### VS Code ###
.vscode/
Lines changed: 169 additions & 0 deletions

@@ -0,0 +1,169 @@
# Kafka Template

The java-spring-boot2 `kafka` template provides a consistent way of developing Spring Boot applications that connect to Kafka. This template extends the `default` template and uses [spring-kafka](https://spring.io/projects/spring-kafka#overview) to connect to a Kafka instance running on Kubernetes, managed by the [Strimzi](https://strimzi.io/) Kafka operator.

The `kafka` template provides a `pom.xml` file that references the parent POM defined by the stack, dependencies that enable the Spring Boot application to connect to Kafka, a simple producer that publishes a message to a Kafka topic, and a simple consumer that consumes the messages the producer publishes to that topic. It also provides a basic liveness endpoint and a set of unit tests that ensure the enabled actuator endpoints work properly: `/actuator/health`, `/actuator/metrics`, `/actuator/prometheus`, and `/actuator/liveness`.
## Getting Started

1. Create a new folder in your local directory and initialize it using the Appsody CLI, e.g.:

    ```
    mkdir my-project
    cd my-project
    appsody init java-spring-boot2 kafka
    ```

    This will initialize a Spring Boot 2 project using the kafka template.
2. Once your project has been initialized, you can run your application using the following command:

    ```
    appsody run --docker-options "--env KAFKA_BOOTSTRAP_SERVERS=${KAFKA_BOOTSTRAP_SERVERS}"
    ```

    E.g.:

    ```
    appsody run --network kafka_default --docker-options "--env KAFKA_BOOTSTRAP_SERVERS=kafka:9092"
    ```

    The value passed to `--network` (here `kafka_default`) is the name of the Docker network in which the Kafka container is running.

    This template expects the `KAFKA_BOOTSTRAP_SERVERS` environment variable to be set to the addresses of the Kafka bootstrap servers.

    This launches a Docker container that runs your application in the foreground, exposing it on port 8080. You should see the producer publish a message to the Kafka topic and the consumer read it. The application is restarted automatically when changes are detected.
3. You should be able to access the following endpoints, as they are exposed by your template application by default:

    * Health endpoint: http://localhost:8080/actuator/health
    * Liveness endpoint: http://localhost:8080/actuator/liveness
    * Metrics endpoint: http://localhost:8080/actuator/metrics
    * Prometheus endpoint: http://localhost:8080/actuator/prometheus
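As a quick smoke test of the liveness contract, the endpoint can be queried with the JDK's HTTP client. The sketch below is illustrative and not part of the template: a stub `com.sun.net.httpserver` server stands in for the running application and answers with the same body the template's liveness endpoint returns, so the class name and port handling are hypothetical.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class LivenessCheck {

    // Query an actuator-style URL and return the response body.
    static String fetch(String url) throws IOException, InterruptedException {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();
        return HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString())
                .body();
    }

    // Start a stub server on a free port, hit /actuator/liveness once, return the body.
    static String checkOnce() throws IOException, InterruptedException {
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/actuator/liveness", exchange -> {
            byte[] body = "{\"status\":\"UP\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });
        server.start();
        try {
            return fetch("http://localhost:" + server.getAddress().getPort() + "/actuator/liveness");
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(checkOnce()); // prints {"status":"UP"}
    }
}
```

Against the real application you would fetch `http://localhost:8080/actuator/liveness` instead of the stub's URL.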
4. To deploy the application to Kubernetes, run the following command:

    ```
    appsody deploy
    ```

    Make sure to add the `KAFKA_BOOTSTRAP_SERVERS` environment variable to `app-deploy.yaml` before running the command above:

    ```
    env:
      - name: KAFKA_BOOTSTRAP_SERVERS
        value: ${KAFKA_BOOTSTRAP_SERVERS}
    ```

    If you are connecting to a Kafka instance managed by the Strimzi Kafka operator, the value of `KAFKA_BOOTSTRAP_SERVERS` should be a fully qualified service hostname.

    E.g.: my-cluster-kafka-bootstrap.strimzi.svc.cluster.local:9092

    * `my-cluster` is the Kafka resource name.
    * `kafka-bootstrap` is the broker load balancer name.
    * `strimzi` is the namespace in which the Kafka instance is deployed.
    * `9092` is the PLAINTEXT port.
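The hostname structure described in the bullets above can be made concrete with a small parser. This is purely illustrative and not part of the template: the class and method names are hypothetical, and it assumes addresses follow the `<service>.<namespace>.svc.cluster.local:<port>` shape shown in the example.

```java
// Illustrative only: decomposes a Strimzi-style bootstrap address of the form
// <service>.<namespace>.svc.cluster.local:<port> into its parts.
public class BootstrapAddress {
    final String service;   // e.g. my-cluster-kafka-bootstrap
    final String namespace; // e.g. strimzi
    final int port;         // e.g. 9092 (PLAINTEXT) or 9093 (TLS)

    BootstrapAddress(String service, String namespace, int port) {
        this.service = service;
        this.namespace = namespace;
        this.port = port;
    }

    static BootstrapAddress parse(String address) {
        String[] hostPort = address.split(":");     // split off the port
        String[] labels = hostPort[0].split("\\."); // split the DNS labels
        return new BootstrapAddress(labels[0], labels[1], Integer.parseInt(hostPort[1]));
    }

    public static void main(String[] args) {
        BootstrapAddress a = parse("my-cluster-kafka-bootstrap.strimzi.svc.cluster.local:9092");
        System.out.println(a.service + " " + a.namespace + " " + a.port);
        // prints my-cluster-kafka-bootstrap strimzi 9092
    }
}
```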
5. To deploy an application that connects to Kafka managed by the Strimzi operator where the brokers require TLS client authentication:

    Add the following properties to `application.properties`:

    ```
    spring.kafka.properties.security.protocol=SSL
    spring.kafka.properties.ssl.protocol=SSL
    spring.kafka.properties.ssl.truststore.location=/etc/secrets/keystores/truststore.p12
    spring.kafka.properties.ssl.truststore.password=${TRUSTSTORE_PASSWORD}
    spring.kafka.properties.ssl.truststore.type=PKCS12
    spring.kafka.properties.ssl.keystore.location=/etc/secrets/keystores/keystore.p12
    spring.kafka.properties.ssl.keystore.password=${KEYSTORE_PASSWORD}
    spring.kafka.properties.ssl.keystore.type=PKCS12
    spring.kafka.properties.ssl.key.password=${KEYSTORE_PASSWORD}
    spring.kafka.properties.ssl.endpoint.identification.algorithm=
    ```

    `TRUSTSTORE_PASSWORD` is the password that you used when creating the truststore.

    `KEYSTORE_PASSWORD` is the password that you used when creating the keystore.
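The `ssl.truststore.*` and `ssl.keystore.*` properties above point at password-protected PKCS12 files. A minimal plain-JDK sketch of that file format's create/load cycle follows; it is illustrative only (the class name is hypothetical, a temp file stands in for `/etc/secrets/keystores/truststore.p12`, and the store is left empty, whereas a real one holds certificates and keys).

```java
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.KeyStore;

public class KeystoreSketch {

    // Create an empty password-protected PKCS12 store -- the kind of file
    // ssl.truststore.location / ssl.keystore.location point at.
    static Path createStore(char[] password) throws Exception {
        Path location = Files.createTempFile("truststore", ".p12");
        KeyStore store = KeyStore.getInstance("PKCS12"); // matches ssl.*.type=PKCS12
        store.load(null, null);                          // initialize a fresh, empty store
        try (FileOutputStream out = new FileOutputStream(location.toFile())) {
            store.store(out, password);                  // ssl.*.password protects the file
        }
        return location;
    }

    // Load it back, roughly what the Kafka client does at startup;
    // a wrong password fails the integrity check with an IOException.
    static KeyStore loadStore(Path location, char[] password) throws Exception {
        KeyStore store = KeyStore.getInstance("PKCS12");
        try (FileInputStream in = new FileInputStream(location.toFile())) {
            store.load(in, password);
        }
        return store;
    }

    public static void main(String[] args) throws Exception {
        char[] password = "changeit".toCharArray();
        Path location = createStore(password);
        System.out.println(loadStore(location, password).getType()); // prints PKCS12
    }
}
```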
    Next, add the following to `app-deploy.yaml` under the `spec` section:

    * Add the following volumes:

    ```
    volumes:
      # emptyDir volume to store the keystore and truststore files so that the application container can eventually read them.
      - emptyDir: {}
        name: keystore-volume
      # this is the secret that is created when the kafka user is created
      - name: my-user-credentials
        secret:
          secretName: my-user
      # secret that holds the CA certificate created by the operator for the brokers
      - name: my-cluster-cluster-ca-cert
        secret:
          secretName: my-cluster-cluster-ca-cert
    ```
    * Volume mount the `keystore-volume`:

    ```
    volumeMounts:
      - mountPath: /etc/secrets/keystores
        name: keystore-volume
    ```

    * Add the `KAFKA_BOOTSTRAP_SERVERS` environment variable. E.g.:

    ```
    env:
      - name: KAFKA_BOOTSTRAP_SERVERS
        value: my-cluster-kafka-bootstrap.kafka.svc.cluster.local:9093
    ```

    `9093` is the TLS port.
    * Add `initContainers` that generate the keystore and truststore, which are eventually used by the application container:

    ```
    initContainers:
      - args:
          - -c
          - echo $ca_bundle && csplit -z -f crt- $ca_bundle '/-----BEGIN CERTIFICATE-----/'
            '{*}' && for file in crt-*; do keytool -import -noprompt -keystore $truststore_jks
            -file $file -storepass $password -storetype PKCS12 -alias service-$file; done
        command:
          - /bin/bash
        env:
          - name: ca_bundle
            value: /etc/secrets/my-cluster-cluster-ca-cert/ca.crt
          - name: truststore_jks
            value: /etc/secrets/keystores/truststore.p12
          - name: password
            value: ${TRUSTSTORE_PASSWORD}
        image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
        name: pem-to-truststore
        volumeMounts:
          - mountPath: /etc/secrets/keystores
            name: keystore-volume
          - mountPath: /etc/secrets/my-user
            name: my-user-credentials
            readOnly: true
          - mountPath: /etc/secrets/my-cluster-cluster-ca-cert
            name: my-cluster-cluster-ca-cert
            readOnly: true
      - args:
          - -c
          - openssl pkcs12 -export -inkey $keyfile -in $crtfile -out $keystore_pkcs12 -password
            pass:$password -name "name"
        command:
          - /bin/bash
        env:
          - name: keyfile
            value: /etc/secrets/my-user/user.key
          - name: crtfile
            value: /etc/secrets/my-user/user.crt
          - name: keystore_pkcs12
            value: /etc/secrets/keystores/keystore.p12
          - name: password
            value: ${KEYSTORE_PASSWORD}
        image: registry.access.redhat.com/redhat-sso-7/sso71-openshift:1.1-16
        name: pem-to-keystore
        volumeMounts:
          - mountPath: /etc/secrets/keystores
            name: keystore-volume
          - mountPath: /etc/secrets/my-user
            name: my-user-credentials
            readOnly: true
    ```

    Here `my-user` is the Kafka user and `my-cluster` is the Kafka cluster name.
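The first init container's `csplit` step separates a concatenated CA bundle into individual certificates before `keytool` imports each one. That splitting logic can be sketched in plain Java; this is an illustrative stand-in, not part of the template, and the class name and dummy bundle contents are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;

public class PemSplitter {

    // Split a concatenated CA bundle (like $ca_bundle above) into individual
    // certificates, mirroring what the csplit step in the init container does
    // before importing each piece into the truststore with keytool.
    static List<String> split(String bundle) {
        List<String> certs = new ArrayList<>();
        String marker = "-----BEGIN CERTIFICATE-----";
        int from = bundle.indexOf(marker);
        while (from >= 0) {
            int next = bundle.indexOf(marker, from + marker.length());
            String cert = (next >= 0 ? bundle.substring(from, next) : bundle.substring(from)).trim();
            certs.add(cert);
            from = next;
        }
        return certs;
    }

    public static void main(String[] args) {
        // Dummy two-certificate bundle; real entries contain base64-encoded DER blocks.
        String bundle = "-----BEGIN CERTIFICATE-----\nAAA\n-----END CERTIFICATE-----\n"
                      + "-----BEGIN CERTIFICATE-----\nBBB\n-----END CERTIFICATE-----\n";
        System.out.println(split(bundle).size()); // prints 2
    }
}
```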
Lines changed: 46 additions & 0 deletions

@@ -0,0 +1,46 @@
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <parent><!--required parent POM-->
        <groupId>{{.stack.parentpomgroup}}</groupId>
        <artifactId>{{.stack.parentpomid}}</artifactId>
        <version>{{.stack.parentpomrange}}</version>
        <relativePath/>
    </parent>

    <groupId>dev.appsody</groupId>
    <artifactId>default-kafka-application</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <!-- versions will come from the parent pom (and included bom):
        mvn dependency:tree
        mvn dependency:display-ancestors
        mvn help:effective-pom | grep '\.version>'
    -->

    <dependencies>
        <!-- From parent:
            org.springframework.boot:spring-boot-starter
            org.springframework.boot:spring-boot-starter-actuator
            org.springframework.boot:spring-boot-starter-test
        -->
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.kafka</groupId>
            <artifactId>spring-kafka-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

</project>
Lines changed: 22 additions & 0 deletions

@@ -0,0 +1,22 @@
package application;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

import java.util.concurrent.CountDownLatch;

// a simple Kafka consumer
@Service
public class KafkaConsumer {
    private final CountDownLatch countDownLatch = new CountDownLatch(1);

    @KafkaListener(topics = "orders", groupId = "orders-service")
    public void receiveString(String message) {
        System.out.println("Receiving message = " + message);
        countDownLatch.countDown();
    }

    public CountDownLatch getCountDownLatch() {
        return countDownLatch;
    }
}
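The consumer exposes its `CountDownLatch` so that another component (the producer's `CommandLineRunner`, below in this commit) can block until a message has actually been received. The coordination pattern can be sketched in plain Java without Kafka; the class names and the delivery thread here are hypothetical stand-ins for the listener container and the broker.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

public class LatchHandshake {

    // Stand-in for the template's KafkaConsumer: counts down once a message arrives.
    static class Listener {
        final CountDownLatch latch = new CountDownLatch(1);

        void receiveString(String message) {
            System.out.println("Receiving message = " + message);
            latch.countDown();
        }
    }

    // Returns true if the listener saw a message before the timeout --
    // the same await the template's CommandLineRunner performs after sending.
    static boolean runOnce() throws InterruptedException {
        Listener listener = new Listener();
        // Stand-in for the broker delivering the published message on another thread.
        Thread delivery = new Thread(() -> listener.receiveString("testData"));
        delivery.start();
        boolean received = listener.latch.await(5, TimeUnit.SECONDS);
        delivery.join();
        return received;
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runOnce()); // prints true
    }
}
```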
Lines changed: 31 additions & 0 deletions

@@ -0,0 +1,31 @@
/*
 * Copyright © 2019 IBM Corp. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file
 * except in compliance with the License. You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software distributed under the
 * License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND,
 * either express or implied. See the License for the specific language governing permissions
 * and limitations under the License.
 */

package application;

import org.springframework.boot.actuate.endpoint.annotation.Endpoint;
import org.springframework.boot.actuate.endpoint.annotation.ReadOperation;
import org.springframework.stereotype.Component;

// Simple custom liveness check
@Endpoint(id = "liveness")
@Component
public class LivenessEndpoint {

    @ReadOperation
    public String testLiveness() {
        return "{\"status\":\"UP\"}";
    }

}
Lines changed: 13 additions & 0 deletions

@@ -0,0 +1,13 @@
package application;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class Main {

    public static void main(String[] args) {
        SpringApplication.run(Main.class, args);
    }

}
Lines changed: 31 additions & 0 deletions

@@ -0,0 +1,31 @@
package application.config;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.KafkaTemplate;

import application.KafkaConsumer;

import java.util.UUID;

@Configuration
public class KafkaProducer {

    @Autowired
    KafkaTemplate<String, String> kafkaTemplate;

    private static final String TOPIC_NAME = "orders";

    // a simple Kafka producer that publishes a message to the "orders" topic after the application is initialized
    @Bean
    public CommandLineRunner kafkaCommandLineRunner(KafkaConsumer kafkaConsumer) {
        return args -> {
            String data = "testData:" + UUID.randomUUID();
            System.out.println("Sending message to kafka = " + data);
            kafkaTemplate.send(TOPIC_NAME, data);
            kafkaConsumer.getCountDownLatch().await();
        };
    }
}
Lines changed: 11 additions & 0 deletions

@@ -0,0 +1,11 @@
# the spring.embedded.kafka.brokers system property is set by the embedded Kafka server to the addresses of the bootstrap servers
spring.kafka.consumer.bootstrap-servers=${spring.embedded.kafka.brokers}
spring.kafka.consumer.group-id=orders-service
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

# the spring.embedded.kafka.brokers system property is set by the embedded Kafka server to the addresses of the bootstrap servers
spring.kafka.producer.bootstrap-servers=${spring.embedded.kafka.brokers}
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
Lines changed: 14 additions & 0 deletions

@@ -0,0 +1,14 @@
# enable the actuator endpoints for health, metrics, and prometheus.
management.endpoints.web.exposure.include=health,metrics,prometheus,liveness
opentracing.jaeger.log-spans=false

spring.kafka.consumer.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}
spring.kafka.consumer.group-id=orders-service
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.enable-auto-commit=true
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

spring.kafka.producer.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
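Both properties files rely on Spring's `${...}` placeholder syntax: here `KAFKA_BOOTSTRAP_SERVERS` is resolved from the environment, while the test properties resolve `spring.embedded.kafka.brokers` from a system property. The substitution can be illustrated with a plain-Java sketch; this is not Spring's actual resolver, and the class name and lookup map are hypothetical.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class Placeholders {

    private static final Pattern PLACEHOLDER = Pattern.compile("\\$\\{([^}]+)\\}");

    // Replace ${NAME} tokens with values from `source` (environment variables or
    // system properties in the real application); unknown names are left untouched.
    static String resolve(String line, Map<String, String> source) {
        Matcher m = PLACEHOLDER.matcher(line);
        StringBuilder out = new StringBuilder();
        while (m.find()) {
            String value = source.get(m.group(1));
            m.appendReplacement(out, Matcher.quoteReplacement(value != null ? value : m.group()));
        }
        m.appendTail(out);
        return out.toString();
    }

    public static void main(String[] args) {
        Map<String, String> env = Map.of("KAFKA_BOOTSTRAP_SERVERS", "kafka:9092");
        System.out.println(resolve("spring.kafka.consumer.bootstrap-servers=${KAFKA_BOOTSTRAP_SERVERS}", env));
        // prints spring.kafka.consumer.bootstrap-servers=kafka:9092
    }
}
```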
