-
Normally, your Kubernetes cluster should have some logging stack that collects the logs and stores them long term. The container engine normally captures the logs from STDOUT and STDERR and stores them in files on the node. These files are scanned by tools such as Fluentd, Fluent Bit, Logstash, and others, and shipped into some kind of database where you can access them (such as OpenSearch, Elasticsearch, etc.).
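As a concrete sketch of such a stack, Fluent Bit can be deployed as a DaemonSet via its official Helm chart. The namespace name below is an assumption for illustration, and the chart's default output target will likely need to be pointed at your own Elasticsearch/OpenSearch endpoint through the chart's values:

```shell
# Add the official Fluent Helm repository
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update

# Deploy Fluent Bit as a DaemonSet; by default it tails the container
# log files under /var/log/containers/ on every node
helm install fluent-bit fluent/fluent-bit \
  --namespace logging --create-namespace
```

From there, the Kafka broker logs are picked up like any other container's output and retained for as long as your backend keeps them.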
-
Hello Jakub, thanks for the explanation. We are using AKS and we know there is a possibility to forward logs to an Azure Log Analytics workspace. But in fact we didn't set up this kind of log forwarding for Kafka, as LA is quite expensive with a big ingest volume (which is expected in the case of Kafka logs)... Thanks, Jan
-
Hello, is there any way to get Kafka broker trace logs? I mean the logs I get when I execute:
kubectl logs <kafka-brk-name> -n kafka
We are using K9s for managing/troubleshooting the Kafka cluster, but the logs usually rotate quite fast, so we are unable to dive deep enough into the history. Alternatively, is there any possibility to redirect such logs to an external tool for further troubleshooting? Can you advise what the recommended way to work with logs (for troubleshooting purposes) is?
Thanks & Regards
Jan
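Until a forwarding stack is in place, kubectl itself offers a few options for pulling more history out of a pod. A minimal sketch, reusing the pod name and namespace from the command above (the output file names are just examples):

```shell
# Dump the current broker container's logs, limited to the last hour
kubectl logs <kafka-brk-name> -n kafka --since=1h > broker-current.log

# Logs from the previous container instance (useful after a crash/restart)
kubectl logs <kafka-brk-name> -n kafka --previous > broker-previous.log

# Follow the log stream into a local file for later analysis
kubectl logs <kafka-brk-name> -n kafka -f --tail=-1 >> broker-stream.log
```

Note that kubectl can only return what the kubelet still has on disk, so once the files rotate away the history is gone; long-term retention needs an external logging stack.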