This project provides you with a Kafka cluster consisting of three brokers. It no longer depends on ZooKeeper because it uses KRaft, which is available and started from the Kafka container.
The scripts are Bash and work on Linux and macOS, but they are essentially just docker-compose commands, so you can also run them on Windows.
- `./up.sh` starts the cluster and tails the log in the background (can be safely cancelled via Ctrl-C without stopping the containers).
- `./down.sh` stops (and removes) the containers.
- Use `./clean.sh` to remove the related volumes containing the Kafka cluster data.
- `./restart.sh` will run `docker-compose restart` on all containers.
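Since the scripts are thin wrappers around docker-compose, their equivalents look roughly like the following sketch (the exact flags used in the scripts may differ, so treat this as illustrative):

```shell
# Roughly what ./up.sh does: start detached, then follow the logs.
# Ctrl-C here stops only the log tail, not the containers.
docker-compose up -d
docker-compose logs -f

# ./down.sh: stop and remove the containers.
docker-compose down

# ./clean.sh: additionally remove the named volumes holding Kafka data.
docker-compose down -v

# ./restart.sh: restart all containers in place.
docker-compose restart
```

These commands require a running Docker daemon and must be executed from the directory containing `docker-compose.yml`.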
Automatic topic creation is disabled, but can be enabled for the cluster via the `KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'` setting.
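In a Confluent-based compose file, that setting goes into the broker's environment block. A minimal sketch (the service name `kafka1` is illustrative; use the actual broker service names from `docker-compose.yml`):

```yaml
services:
  kafka1:                                  # illustrative service name
    environment:
      # Let brokers create topics on first use instead of requiring
      # explicit topic creation. Apply to all three broker services.
      KAFKA_AUTO_CREATE_TOPICS_ENABLE: 'true'
```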
To update the (Confluent) Kafka version and other versions, edit the `.env` file.
Remember to comment out the `PLATFORM` env variable if you are not running on an arm64 platform.
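A `.env` along those lines might look like this (variable names and the version value are illustrative; check the actual file for the real ones):

```shell
# Confluent Kafka image tag (illustrative value -- see the real .env)
CONFLUENT_VERSION=7.5.0

# Comment this out when not on an arm64 machine (e.g. Apple Silicon)
PLATFORM=linux/arm64
```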
To manage the cluster, visit the AKHQ web UI at http://localhost:8405.
The producer container also produces messages on the `test` topic, and the consumer container consumes them. You can see the output in the logs of the consumer container (`docker-compose logs -f consumer`) or via the AKHQ website.
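You can also read the topic directly with the console consumer that ships in the Confluent Kafka images. A sketch, assuming a broker service named `kafka1` listening on `localhost:9092` inside the container (adjust both to match `docker-compose.yml`):

```shell
# Print the first five messages from the test topic, then exit.
# Service name "kafka1" and the bootstrap address are assumptions.
docker-compose exec kafka1 \
  kafka-console-consumer \
    --bootstrap-server localhost:9092 \
    --topic test \
    --from-beginning \
    --max-messages 5
```

This requires the cluster to be up (`./up.sh`) and the producer to have written at least five messages.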
This setup is not meant for production usage, but it is well suited for local integration testing and as a way of getting to know Kafka.
This project is based on (or rather copied from) here. If you think the listener settings in combination with Docker are complex, you are right; this blog post is a nice explanation.