UnderReplicatedPartitions in Strimzi Kafka #7239
-
I'm not sure this code snippet is enough to properly understand it. Is it just complaining about some broker connections? Or what does it do?
-
Please have a look at the logs below. One from ZooKeeper:
2022-08-24 13:00:02,682 INFO Processing ruok command from /127.0.0.1:41478 (org.apache.zookeeper.server.NettyServerCnxn) [nioEventLoopGroup-4-2]
Strimzi pod logs:
2022-08-24 12:59:49 INFO AbstractOperator:173 - Reconciliation #3633(timer) Kafka(kafka/demo-appsite): Kafka demo-appsite should be created or updated
Kafka pod additional logs:
2022-08-24 13:00:54,869 INFO [ReplicaFetcher replicaId=5, leaderId=4, fetcherId=0] Retrying leaderEpoch request for partition tr-trserv-dag-json.blue-3 as the leader reported an error: UNKNOWN_LEADER_EPOCH (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-4]
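For context, Kafka's own topic tool can list exactly which partitions are under-replicated at any given moment, which narrows things down faster than the broker logs. A minimal sketch, assuming the usual Strimzi pod naming for this cluster (namespace kafka, cluster demo-appsite, as seen in the operator log above) and a plaintext listener on port 9092; adjust both if your setup differs:

```sh
# List every partition whose in-sync replica set is currently smaller than
# its full replica set. The pod name follows Strimzi's <cluster>-kafka-<n>
# convention; use port 9093 instead if only the TLS listener is exposed.
kubectl exec -n kafka demo-appsite-kafka-0 -- \
  /opt/kafka/bin/kafka-topics.sh \
    --bootstrap-server localhost:9092 \
    --describe --under-replicated-partitions
```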
-
Hi folks,
We are seeing UnderReplicatedPartitions alerts very frequently in Strimzi Kafka.
Below are the logs:
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 0 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 0 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 5 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 5 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 1 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 1 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 6 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 6 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 2 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 2 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 7 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 7 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 3 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 3 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 DEBUG [Controller id=1] Topics not in preferred replica for broker 4 Map() (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:56:32,863 TRACE [Controller id=1] Leader imbalance ratio for broker 4 is 0.0 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:57:52,169 INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager) [group-metadata-manager-0]
2022-08-24 12:59:37,686 INFO Unable to read additional data from server sessionid 0x100661e15210001, likely server has closed socket, closing socket connection and attempting reconnect (org.apache.zookeeper.ClientCnxn) [zk-session-expiry-handler0-SendThread(localhost:2181)]
2022-08-24 12:59:39,291 INFO Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) [zk-session-expiry-handler0-SendThread(localhost:2181)]
2022-08-24 12:59:39,292 INFO Socket connection established, initiating session, client: /127.0.0.1:58060, server: localhost/127.0.0.1:2181 (org.apache.zookeeper.ClientCnxn) [zk-session-expiry-handler0-SendThread(localhost:2181)]
2022-08-24 12:59:39,331 INFO Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x100661e15210001, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn) [zk-session-expiry-handler0-SendThread(localhost:2181)]
2022-08-24 12:59:40,174 INFO [Controller id=1] Newly added brokers: , deleted brokers: 5,6, bounced brokers: , all live brokers: 0,1,2,3,4,7 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:59:40,175 INFO [RequestSendThread controllerId=1] Shutting down (kafka.controller.RequestSendThread) [controller-event-thread]
2022-08-24 12:59:40,176 INFO [RequestSendThread controllerId=1] Stopped (kafka.controller.RequestSendThread) [Controller-1-to-broker-5-send-thread]
2022-08-24 12:59:40,176 INFO [RequestSendThread controllerId=1] Shutdown completed (kafka.controller.RequestSendThread) [controller-event-thread]
2022-08-24 12:59:40,185 INFO [RequestSendThread controllerId=1] Shutting down (kafka.controller.RequestSendThread) [controller-event-thread]
2022-08-24 12:59:40,185 INFO [RequestSendThread controllerId=1] Stopped (kafka.controller.RequestSendThread) [Controller-1-to-broker-6-send-thread]
2022-08-24 12:59:40,185 INFO [RequestSendThread controllerId=1] Shutdown completed (kafka.controller.RequestSendThread) [controller-event-thread]
2022-08-24 12:59:40,191 INFO [Controller id=1] Broker failure callback for 5,6 (kafka.controller.KafkaController) [controller-event-thread]
2022-08-24 12:59:40,194 TRACE [Controller id=1 epoch=99] Changed partition trm-storeonce-custdata-current.blue-1 state from OnlinePartition to OfflinePartition (state.change.logger) [controller-event-thread]
2022-08-24 12:59:40,194 TRACE [Controller id=1 epoch=99] Changed partition trm-ss-sr-hires-data-5 state from OnlinePartition to OfflinePartition (state.change.logger) [controller-event-thread]
2022-08-24 12:59:40,194 TRACE [Controller id=1 epoch=99] Changed partition trm-storeonce-processed-storeonce-notification-support.blue-3 state from OnlinePartition to OfflinePartition (state.change.logger) [controller-event-thread]
2022-08-24 12:59:40,194 TRACE [Controller id=1 epoch=99] Changed partition trm-storeonce-service-instance-performance.blue-0 state from OnlinePartition to OfflinePartition (state.change.logger) [controller-event-thread]
2022-08-24 12:59:40,194 TRACE [Controller id=1 epoch=99] Changed partition trm-ss-system-alerts.blue-1 state from OnlinePartition to OfflinePartition (state.change.logger) [controller-event-thread]
If this happened once in a while it would be acceptable, but it repeats every hour.
Please help us here; thanks in advance for your help.
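For what it's worth, the controller log above shows brokers 5 and 6 leaving the live broker set right after a ZooKeeper session bounce, which is exactly what flips those partitions from OnlinePartition to OfflinePartition and trips the UnderReplicatedPartitions alert. A quick way to check whether those broker pods are actually restarting every hour (a sketch, assuming Strimzi's standard strimzi.io/name pod label and <cluster>-kafka-<n> pod names):

```sh
# Restart counts that climb hourly point at OOM kills, failed liveness
# probes, or node evictions rather than a Kafka-level problem.
kubectl get pods -n kafka -l strimzi.io/name=demo-appsite-kafka

# Show why a specific broker's container last terminated (exit code,
# OOMKilled, etc.).
kubectl describe pod -n kafka demo-appsite-kafka-5 | grep -A 6 'Last State'
```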