8 changes: 8 additions & 0 deletions CHANGELOG.md
@@ -10,6 +10,10 @@ All notable changes to this project will be documented in this file.
- Add experimental support for Kafka KRaft mode ([#889]).
- Add experimental support for Kafka `4.1.0` ([#889]).
- Add `prometheus.io/path|port|scheme` annotations to metrics service ([#897]).
- Add `client.properties` to the Kafka configuration config map ([#898]).

Use this file together with the Kafka client shell scripts and preconfigured TLS settings.
Unfortunately, when Kerberos is enabled this file is incomplete and must be edited before it can be used.
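
A sketch of how the file might be used with one of the client scripts; the mount path `/stackable/kafka/config/client.properties` is an assumption and may differ in your Pod:

```bash
# Sketch: list topics using the generated client settings (TLS etc.).
# The client.properties path is an assumption; check where the config map is mounted.
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- \
  /stackable/kafka/bin/kafka-topics.sh \
  --list \
  --bootstrap-server localhost:9092 \
  --command-config /stackable/kafka/config/client.properties
```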

### Changed

@@ -21,11 +25,15 @@ All notable changes to this project will be documented in this file.
- use the new `server.yaml` for jmx configuration
- update metrics tests
- update monitoring doc
- Replace `kcat` with Kafka client scripts wherever possible ([#898]).

At the moment, `kcat` is still used for liveness probes and Kerberos tests.

[#889]: https://github.com/stackabletech/kafka-operator/pull/889
[#890]: https://github.com/stackabletech/kafka-operator/pull/890
[#892]: https://github.com/stackabletech/kafka-operator/pull/892
[#897]: https://github.com/stackabletech/kafka-operator/pull/897
[#898]: https://github.com/stackabletech/kafka-operator/pull/898
[#900]: https://github.com/stackabletech/kafka-operator/pull/900

## [25.7.0] - 2025-07-23
55 changes: 30 additions & 25 deletions docs/modules/kafka/examples/getting_started/getting_started.sh
@@ -90,28 +90,33 @@ trap "kill $PORT_FORWARD_PID" EXIT

sleep 15

echo "Creating test data"
# tag::kcat-create-data[]
echo "some test data" > data
# end::kcat-create-data[]

echo "Writing test data"
# tag::kcat-write-data[]
kcat -b localhost:9092 -t test-data-topic -P data
# end::kcat-write-data[]

echo "Reading test data"
# tag::kcat-read-data[]
kcat -b localhost:9092 -t test-data-topic -C -e > read-data.out
# end::kcat-read-data[]

echo "Check contents"
# tag::kcat-check-data[]
grep "some test data" read-data.out
# end::kcat-check-data[]

echo "Cleanup"
# tag::kcat-cleanup-data[]
rm data
rm read-data.out
# end::kcat-cleanup-data[]
echo "Creating test topic test-data-topic"
# tag::create-topic[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-topics.sh \
--create \
--topic test-data-topic \
--partitions 1 \
--bootstrap-server localhost:9092
# end::create-topic[]

echo "Publish test data"
# tag::write-data[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-producer-perf-test.sh \
--producer-props bootstrap.servers=localhost:9092 \
--topic test-data-topic \
--payload-monotonic \
--throughput 1 \
--num-records 5
# end::write-data[]

echo "Consume test data"
# tag::read-data[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server localhost:9092 \
--topic test-data-topic \
--offset earliest \
--partition 0 \
--timeout-ms 1000
# end::read-data[]

echo "Success!"
55 changes: 30 additions & 25 deletions docs/modules/kafka/examples/getting_started/getting_started.sh.j2
@@ -90,28 +90,33 @@ trap "kill $PORT_FORWARD_PID" EXIT

sleep 15

echo "Creating test data"
# tag::kcat-create-data[]
echo "some test data" > data
# end::kcat-create-data[]

echo "Writing test data"
# tag::kcat-write-data[]
kcat -b localhost:9092 -t test-data-topic -P data
# end::kcat-write-data[]

echo "Reading test data"
# tag::kcat-read-data[]
kcat -b localhost:9092 -t test-data-topic -C -e > read-data.out
# end::kcat-read-data[]

echo "Check contents"
# tag::kcat-check-data[]
grep "some test data" read-data.out
# end::kcat-check-data[]

echo "Cleanup"
# tag::kcat-cleanup-data[]
rm data
rm read-data.out
# end::kcat-cleanup-data[]
echo "Creating test topic test-data-topic"
# tag::create-topic[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-topics.sh \
--create \
--topic test-data-topic \
--partitions 1 \
--bootstrap-server localhost:9092
# end::create-topic[]

echo "Publish test data"
# tag::write-data[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-producer-perf-test.sh \
--producer-props bootstrap.servers=localhost:9092 \
--topic test-data-topic \
--payload-monotonic \
--throughput 1 \
--num-records 5
# end::write-data[]

echo "Consume test data"
# tag::read-data[]
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-console-consumer.sh \
--bootstrap-server localhost:9092 \
--topic test-data-topic \
--offset earliest \
--partition 0 \
--timeout-ms 1000
# end::read-data[]

echo "Success!"
73 changes: 52 additions & 21 deletions docs/modules/kafka/pages/getting_started/first_steps.adoc
@@ -1,6 +1,5 @@
= First steps
:description: Deploy and verify a Kafka cluster on Kubernetes with Stackable Operators, including ZooKeeper setup and data testing using kcat.
:kcat-install: https://github.com/edenhill/kcat#install
:description: Deploy and verify a Kafka cluster on Kubernetes with Stackable Operators, including ZooKeeper setup and data testing.

After going through the xref:getting_started/installation.adoc[] section and having installed all the operators, you now deploy a Kafka cluster and the required dependencies.
Afterward you can <<_verify_that_it_works, verify that it works>> by producing test data into a topic and consuming it.
@@ -65,58 +64,90 @@ This creates the actual Kafka instance.

== Verify that it works

Next you produce data into a topic and read it via {kcat-install}[kcat].
Depending on your platform you may need to replace `kafkacat` in the commands below with `kcat`.
Next, use the Kafka client scripts to create a topic and publish and consume data.

First, make sure that all the Pods in the StatefulSets are ready:
The Kafka operator has created a service called `simple-kafka-broker-default-bootstrap`.
This is the endpoint clients initially connect to in order to publish and consume data.
First, make sure that the service exists and is healthy:

[source,bash]
----
kubectl get statefulset
kubectl describe svc simple-kafka-broker-default-bootstrap
----

The output should show all pods ready:
The output should look somewhat like this:

----
NAME READY AGE
simple-kafka-broker-default 3/3 5m
simple-zk-server-default 3/3 7m
Name: simple-kafka-broker-default-bootstrap
Namespace: default
Labels: app.kubernetes.io/component=broker
app.kubernetes.io/instance=simple-kafka-broker-default-bootstrap
app.kubernetes.io/managed-by=listeners.stackable.tech_listener
app.kubernetes.io/name=listener
app.kubernetes.io/role-group=default
app.kubernetes.io/version=3.9.1-stackable0.0.0-dev
stackable.tech/vendor=Stackable
Annotations: <none>
Selector: listener.stackable.tech/mnt.9555cbb6f38d4b0ca1771e6d83d28e27=simple-kafka-broker-default-bootstrap
Type: NodePort
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.105.88.52
IPs: 10.105.88.52
Port: kafka 9092/TCP
TargetPort: 9092/TCP
NodePort: kafka 32608/TCP
Endpoints: 10.244.4.22:9092,10.244.4.24:9092,10.244.4.23:9092
Session Affinity: None
External Traffic Policy: Local
Internal Traffic Policy: Cluster
Events: <none>
----

Then, create a port-forward for the Kafka Broker:
The output shows three endpoints, corresponding to the three broker Pods of the Kafka cluster.
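
To cross-check, you can list the broker Pods; a sketch, assuming the Pods carry the `app.kubernetes.io/component=broker` label shown in the service output above:

----
kubectl get pods -l app.kubernetes.io/component=broker
----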

Then, create a port-forward on this service:

----
include::example$getting_started/getting_started.sh[tag=port-forwarding]
----
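
The included command essentially forwards local port 9092 to the bootstrap service, along the lines of:

----
kubectl port-forward svc/simple-kafka-broker-default-bootstrap 9092:9092
----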

Create a file containing some data:
Now, create a new topic called `test-data-topic`:

----
include::example$getting_started/getting_started.sh[tag=kcat-create-data]
include::example$getting_started/getting_started.sh[tag=create-topic]
----

Write that data:
Use the Kafka performance producer script to send a few messages to the topic you just created:

----
include::example$getting_started/getting_started.sh[tag=kcat-write-data]
include::example$getting_started/getting_started.sh[tag=write-data]
----

Read that data:
The output should contain the following line:

----
include::example$getting_started/getting_started.sh[tag=kcat-read-data]
5 records sent, 1.138434 records/sec (0.00 MB/sec), 83.40 ms avg latency, 395.00 ms max latency, 3 ms 50th, 395 ms 95th, 395 ms 99th, 395 ms 99.9th.
----

Check the content:
This confirms that five messages were sent to the topic; the rest of the line reports performance statistics.
The performance figures do not matter here; what matters is that five unique messages were produced, which you consume next.

Now consume the messages produced above:

----
include::example$getting_started/getting_started.sh[tag=kcat-check-data]
include::example$getting_started/getting_started.sh[tag=read-data]
----

And clean up:
The consumer should print the messages interleaved with logging statements:

----
include::example$getting_started/getting_started.sh[tag=kcat-cleanup-data]
0
1
2
3
4
----
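
When you are done, you can delete the test topic again; a sketch reusing the same broker Pod and bootstrap address as above:

----
kubectl exec -n default simple-kafka-broker-default-0 -c kafka -t -- /stackable/kafka/bin/kafka-topics.sh \
  --delete \
  --topic test-data-topic \
  --bootstrap-server localhost:9092
----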

You successfully created a Kafka cluster and produced and consumed data.
1 change: 0 additions & 1 deletion docs/modules/kafka/pages/getting_started/index.adoc
@@ -11,7 +11,6 @@ You need:
* a Kubernetes cluster
* kubectl
* optional: Helm
* https://github.com/edenhill/kcat#install[kcat] for testing

Resource sizing depends on cluster type(s), usage and scope, but as a starting point a minimum of the following resources is recommended for this operator:

2 changes: 1 addition & 1 deletion docs/modules/kafka/pages/index.adoc
@@ -21,7 +21,7 @@ It is commonly used for real-time data processing, data ingestion, event streami

== Getting started

Follow the xref:kafka:getting_started/index.adoc[] which guides you through installing The Stackable Kafka and ZooKeeper operators, setting up ZooKeeper and Kafka and testing your Kafka using `kcat`.
Follow the xref:kafka:getting_started/index.adoc[] which guides you through installing the Stackable Kafka and ZooKeeper operators, setting up ZooKeeper and Kafka, and testing your Kafka installation.

== Resources
