
Commit 6dc6b03

Kafka binding documentation (#2474)
* doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * doc for kafka binding * kafka binding doc * kafka binding doc * kafka binding doc
1 parent ab75f0f commit 6dc6b03


7 files changed, +466 -23 lines changed


docs/eventing/samples/kafka/README.md

Lines changed: 24 additions & 17 deletions
@@ -1,4 +1,5 @@
The following examples will help you understand how to use the different Apache
Kafka components for Knative.

## Prerequisites

@@ -11,7 +12,8 @@ All examples require:

### Setting up Apache Kafka

If you want to run the Apache Kafka cluster on Kubernetes, the simplest option
is to install it by using [Strimzi](https://strimzi.io).

1. Create a namespace for your Apache Kafka installation, like `kafka`:
   ```shell
@@ -23,7 +25,7 @@
   | sed 's/namespace: .*/namespace: kafka/' \
   | kubectl -n kafka apply -f -
   ```
1. Describe the size of your Apache Kafka installation in `kafka.yaml`, like:
   ```yaml
   apiVersion: kafka.strimzi.io/v1beta1
   kind: Kafka
@@ -56,33 +58,38 @@
   $ kubectl apply -n kafka -f kafka.yaml
   ```

   This will install a small, non-production cluster of Apache Kafka. To verify
   your installation, check that the pods for Strimzi are all up in the `kafka`
   namespace:

   ```shell
   $ kubectl get pods -n kafka
   NAME                                          READY   STATUS    RESTARTS   AGE
   my-cluster-entity-operator-65995cf856-ld2zp   3/3     Running   0          102s
   my-cluster-kafka-0                            2/2     Running   0          2m8s
   my-cluster-zookeeper-0                        2/2     Running   0          2m39s
   my-cluster-zookeeper-1                        2/2     Running   0          2m49s
   my-cluster-zookeeper-2                        2/2     Running   0          2m59s
   strimzi-cluster-operator-77555d4b69-sbrt4     1/1     Running   0          3m14s
   ```

> NOTE: For production-ready installs, check [Strimzi](https://strimzi.io).

### Installation script

If you want to install the latest version of Strimzi in just one step, we have
a [script](./kafka_setup.sh) for your convenience, which does exactly the same
steps that are listed above:

```shell
$ ./kafka_setup.sh
```

## Examples of Apache Kafka and Knative

A number of different examples showing the `KafkaSource`, the `KafkaChannel`,
and the `KafkaBinding` can be found here:

- [`KafkaSource` to `Service`](./source/README.md)
- [`KafkaChannel` and Broker](./channel/README.md)
- [`KafkaBinding`](./binding/README.md)
docs/eventing/samples/kafka/binding/README.md

Lines changed: 264 additions & 0 deletions
@@ -0,0 +1,264 @@
KafkaBinding is responsible for injecting Kafka bootstrap connection information
into a Kubernetes resource that embeds a PodSpec (as `spec.template.spec`). This
enables easy bootstrapping of a Kafka client.
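
For illustration only, the sketch below shows roughly what a bound workload ends
up with: the binding is expected to add the Kafka bootstrap servers as an
environment variable (assumed here to be `KAFKA_BOOTSTRAP_SERVERS`) on the
containers in the subject's pod template. The Job name, image, and values are
hypothetical and not part of this sample.

```yaml
# Hypothetical Job after binding: the env entry below is what the
# KafkaBinding is assumed to inject into the containers under
# spec.template.spec.
apiVersion: batch/v1
kind: Job
metadata:
  name: example-kafka-client                       # placeholder name
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: client
          image: example.com/my-kafka-client:latest  # placeholder image
          env:
            - name: KAFKA_BOOTSTRAP_SERVERS          # injected by the binding
              value: my-cluster-kafka-bootstrap.kafka:9092
```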

## Create a Job that uses KafkaBinding

In the example below, a Kubernetes Job uses the KafkaBinding to produce messages
on a Kafka topic, which are received by the Event Display service via a
KafkaSource.

### Prerequisites

1. You must ensure that you meet the
   [prerequisites listed in the Apache Kafka overview](../README.md).
2. This feature is available from Knative Eventing 0.15+.

### Creating a `KafkaSource` source CRD

1. Install the `KafkaSource` sub-component to your Knative cluster:

   ```
   kubectl apply -f https://storage.googleapis.com/knative-releases/eventing-contrib/latest/kafka-source.yaml
   ```

1. Check that the `kafka-controller-manager-0` pod is running.

   ```
   kubectl get pods --namespace knative-sources
   NAME                         READY   STATUS    RESTARTS   AGE
   kafka-controller-manager-0   1/1     Running   0          42m
   ```

### Create the Event Display service

1. (Optional) Source code for Event Display service

   Get the source code of the Event Display container image from
   [here](https://github.com/knative/eventing-contrib/blob/master/cmd/event_display/main.go).

1. Deploy the Event Display Service via kubectl:

   ```yaml
   apiVersion: serving.knative.dev/v1
   kind: Service
   metadata:
     name: event-display
   spec:
     template:
       spec:
         containers:
           - image: gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display
   ```

   ```
   $ kubectl apply --filename event-display.yaml
   ...
   service.serving.knative.dev/event-display created
   ```

1. (Optional) Deploy the Event Display Service via the kn cli:

   Alternatively, you can create the Knative service using the `kn` cli as
   shown below:

   ```
   kn service create event-display --image=gcr.io/knative-releases/github.com/knative/eventing-contrib/cmd/event_display
   ```

1. Ensure that the Service pod is running. The pod name will be prefixed with
   `event-display`.

   ```
   $ kubectl get pods
   NAME                                            READY   STATUS    RESTARTS   AGE
   event-display-00001-deployment-5d5df6c7-gv2j4   2/2     Running   0          72s
   ...
   ```

### Apache Kafka Event Source

1. Modify `event-source.yaml` accordingly with bootstrap servers, topics, etc.:

   ```yaml
   apiVersion: sources.knative.dev/v1alpha1
   kind: KafkaSource
   metadata:
     name: kafka-source
   spec:
     consumerGroup: knative-group
     bootstrapServers:
       - my-cluster-kafka-bootstrap.kafka:9092 # note the kafka namespace
     topics:
       - logs
     sink:
       ref:
         apiVersion: serving.knative.dev/v1
         kind: Service
         name: event-display
   ```

1. Deploy the event source.

   ```
   $ kubectl apply -f event-source.yaml
   ...
   kafkasource.sources.knative.dev/kafka-source created
   ```

1. Check that the event source pod is running. The pod name will be prefixed
   with `kafka-source`.

   ```
   $ kubectl get pods
   NAME                                  READY   STATUS    RESTARTS   AGE
   kafka-source-xlnhq-5544766765-dnl5s   1/1     Running   0          40m
   ```

### Kafka Binding Resource

Create the KafkaBinding that will inject Kafka bootstrap information into
selected `Jobs`:

1. Modify `kafka-binding.yaml` accordingly with bootstrap servers, etc.:

   ```yaml
   apiVersion: bindings.knative.dev/v1alpha1
   kind: KafkaBinding
   metadata:
     name: kafka-binding-test
   spec:
     subject:
       apiVersion: batch/v1
       kind: Job
       selector:
         matchLabels:
           kafka.topic: "logs"
     bootstrapServers:
       - my-cluster-kafka-bootstrap.kafka:9092
   ```

In this case, we will bind any `Job` with the label `kafka.topic: "logs"`.
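
The binding should be in place before the Job is created so that the Job picks
up the injected configuration. A minimal sketch, assuming the manifest above is
saved as `kafka-binding.yaml` (the file name is just a convention used in this
example):

```shell
# Create the KafkaBinding so that matching Jobs get the Kafka bootstrap
# information injected when they are created.
$ kubectl apply -f kafka-binding.yaml
```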

### Create Kubernetes Job

1. Source code for kafka-publisher service

   Get the source code of the kafka-publisher container image from
   [here](https://github.com/knative/eventing-contrib/blob/master/test/test_images/kafka-publisher/main.go).

1. Now we will use the kafka-publisher container to send events to the Kafka
   topic when the Job runs (a sketch of applying the Job follows this list):

   ```yaml
   apiVersion: batch/v1
   kind: Job
   metadata:
     labels:
       kafka.topic: "logs"
     name: kafka-publisher-job
     namespace: test-alpha
   spec:
     backoffLimit: 1
     completions: 1
     parallelism: 1
     template:
       metadata:
         annotations:
           sidecar.istio.io/inject: "false"
       spec:
         restartPolicy: Never
         containers:
           - image: docker.io/murugappans/kafka-publisher-1974f83e2ff7c8994707b5e8731528e8@sha256:fd79490514053c643617dc72a43097251fed139c966fd5d131134a0e424882de
             env:
               - name: KAFKA_TOPIC
                 value: "logs"
               - name: KAFKA_KEY
                 value: "0"
               - name: KAFKA_HEADERS
                 value: "content-type:application/json"
               - name: KAFKA_VALUE
                 value: '{"msg":"This is a test!"}'
             name: kafka-publisher
   ```
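
To run the Job, apply the manifest; a minimal sketch, assuming it is saved as
`kafka-publisher-job.yaml`, that the `test-alpha` namespace already exists, and
that the KafkaBinding above was created in the same namespace as the Job:

```shell
# Run the publisher Job; the KafkaBinding created earlier should inject the
# bootstrap server configuration into its pod.
$ kubectl apply -f kafka-publisher-job.yaml

# Optionally, inspect the environment of the Job's pod (the Job controller
# labels its pods with job-name) to see what the binding injected.
$ kubectl get pods -n test-alpha -l job-name=kafka-publisher-job \
    -o jsonpath='{.items[0].spec.containers[0].env}'
```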

### Verify

1. Ensure the Event Display received the message sent to it by the Event Source.

   ```
   $ kubectl logs --selector='serving.knative.dev/service=event-display' -c user-container

   ☁️  cloudevents.Event
   Validation: valid
   Context Attributes,
     specversion: 1.0
     type: dev.knative.kafka.event
     source: /apis/v1/namespaces/default/kafkasources/kafka-source#logs
     subject: partition:0#1
     id: partition:0/offset:1
     time: 2020-05-17T19:45:02.7Z
     datacontenttype: application/json
   Extensions,
     kafkaheadercontenttype: application/json
     key: 0
     traceparent: 00-f383b779f512358b24ffbf6556a6d6da-cacdbe78ef9b5ad3-00
   Data,
     {
       "msg": "This is a test!"
     }
   ```

## Connecting to a TLS enabled Kafka broker

The KafkaBinding supports TLS and SASL authentication methods. For injecting TLS
authentication, please provide the following files:

- CA Certificate
- Client Certificate and Key

These files are expected to be in PEM format; if they are in another format,
such as JKS, please convert them to PEM first, for example as shown below.
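
A minimal conversion sketch using `keytool` and `openssl`, assuming the client
keystore is named `client.jks` (all file names here are placeholders):

```shell
# Convert the JKS keystore to PKCS#12 (you will be prompted for passwords).
keytool -importkeystore -srckeystore client.jks \
  -destkeystore client.p12 -deststoretype PKCS12

# Extract the client certificate and the private key as PEM files.
openssl pkcs12 -in client.p12 -nokeys -out certificate.pem
openssl pkcs12 -in client.p12 -nocerts -nodes -out key.pem
```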

1. Create the certificate files as secrets in the namespace where the
   KafkaBinding is going to be set up:

   ```
   $ kubectl create secret generic cacert --from-file=caroot.pem
   secret/cacert created

   $ kubectl create secret tls kafka-secret --cert=certificate.pem --key=key.pem
   secret/kafka-secret created
   ```

2. Apply `kafkabinding-tls.yaml`, changing `bootstrapServers` accordingly (the
   apply command is sketched after this list):

   ```yaml
   apiVersion: bindings.knative.dev/v1alpha1
   kind: KafkaBinding
   metadata:
     name: kafka-source-with-tls
   spec:
     subject:
       apiVersion: batch/v1
       kind: Job
       selector:
         matchLabels:
           kafka.topic: "logs"
     net:
       tls:
         enable: true
         cert:
           secretKeyRef:
             key: tls.crt
             name: kafka-secret
         key:
           secretKeyRef:
             key: tls.key
             name: kafka-secret
         caCert:
           secretKeyRef:
             key: caroot.pem
             name: cacert
     consumerGroup: knative-group
     bootstrapServers:
       - my-secure-kafka-bootstrap.kafka:443
   ```
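
Assuming the manifest above is saved as `kafkabinding-tls.yaml`, the apply step
might look like this:

```shell
# Create the TLS-enabled KafkaBinding.
$ kubectl apply -f kafkabinding-tls.yaml
```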
