Description
Module
Kafka
Testcontainers version
1.20.5
Using the latest Testcontainers version?
Yes
Host OS
Windows
Host Arch
x86
Docker version
Client:
Cloud integration: v1.0.35+desktop.10
Version: 25.0.3
API version: 1.44
Go version: go1.21.6
Git commit: 4debf41
Built: Tue Feb 6 21:13:02 2024
OS/Arch: windows/amd64
Context: default
Server: Docker Desktop 4.27.2 (137060)
Engine:
Version: 25.0.3
API version: 1.44 (minimum version 1.24)
Go version: go1.21.6
Git commit: f417435
Built: Tue Feb 6 21:14:25 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.28
GitCommit: ae07eda36dd25f8a1b98dfbf587313b99c0190bb
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e94
docker-init:
Version: 0.19.0
GitCommit: de40ad0
What happened?
We have a Testcontainers setup for Kafka with Schema Registry as follows:
package de.XXX
import org.springframework.context.annotation.Configuration
import org.testcontainers.containers.GenericContainer
import org.testcontainers.containers.KafkaContainer
import org.testcontainers.containers.Network
import org.testcontainers.containers.wait.strategy.Wait
import org.testcontainers.utility.DockerImageName
@Configuration
class KafkaContainerConfig {
    init {
        val network = Network.newNetwork()

        val kafkaImage =
            DockerImageName
                .parse("my_personal_repo/cp-kafka:7.8.0")
                .asCompatibleSubstituteFor("confluentinc/cp-kafka")
        val kafka = KafkaContainer(kafkaImage).withNetwork(network)
        kafka.start()

        val schemaRegistryImage =
            DockerImageName
                .parse("my_personal_repo/cp-schema-registry:7.8.0")
                .asCompatibleSubstituteFor("confluentinc/cp-schema-registry")
        val schemaRegistry: GenericContainer<*> =
            GenericContainer(schemaRegistryImage)
                .withNetwork(network)
                .withExposedPorts(8081)
                .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
                .withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081")
                .withEnv(
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS",
                    "PLAINTEXT://" + kafka.networkAliases[0] + ":9092",
                ).waitingFor(Wait.forHttp("/subjects").forStatusCode(200))
        schemaRegistry.start()
    }
}
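For context (this is not part of the configuration class, only an illustration of how we consume it): roughly the following lines, appended after schemaRegistry.start() inside the init block, hand the endpoints to the application under test. The property names are placeholders; only the container accessors (bootstrapServers, host, getMappedPort) are standard Testcontainers API.

        // Illustration only, appended after schemaRegistry.start() in the init block above.
        // The property names are placeholders for whatever the application under test expects.
        val bootstrapServers = kafka.bootstrapServers // e.g. PLAINTEXT://localhost:<mapped port>
        val schemaRegistryUrl = "http://" + schemaRegistry.host + ":" + schemaRegistry.getMappedPort(8081)
        System.setProperty("spring.kafka.bootstrap-servers", bootstrapServers)
        System.setProperty("schema.registry.url", schemaRegistryUrl)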
Due to a Spring Boot update to 3.4.3, Testcontainers is now used in version 1.20.5, in which org.testcontainers.containers.KafkaContainer is deprecated. We therefore switched to ConfluentKafkaContainer:
import org.springframework.context.annotation.Configuration
import org.testcontainers.containers.GenericContainer
import org.testcontainers.containers.Network
import org.testcontainers.containers.wait.strategy.Wait
import org.testcontainers.kafka.ConfluentKafkaContainer
import org.testcontainers.utility.DockerImageName
@Configuration
class KafkaContainerConfig {
    init {
        val network = Network.newNetwork()

        val kafkaImage =
            DockerImageName
                .parse("my_cp-kafka-registry_image_name:myVersion")
                .asCompatibleSubstituteFor("confluentinc/cp-kafka")
        val kafka = ConfluentKafkaContainer(kafkaImage).withNetwork(network)
        kafka.start()

        val schemaRegistryImage =
            DockerImageName
                .parse("my_cp-schema-registry_image_name:myVersion")
                .asCompatibleSubstituteFor("confluentinc/cp-schema-registry")
        val schemaRegistry: GenericContainer<*> =
            GenericContainer(schemaRegistryImage)
                .withNetwork(network)
                .withExposedPorts(8081)
                .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
                .withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081")
                .withEnv(
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS",
                    "PLAINTEXT://" + kafka.networkAliases[0] + ":9092",
                ).waitingFor(Wait.forHttp("/subjects").forStatusCode(200))
        schemaRegistry.start()
    }
}
Now Schema Registry fails to connect to Kafka; see the log output below.
To my understanding, the Kafka broker correctly advertises listeners for both Docker-network-internal access and host access:
cp-kafka log:
2025-03-11 15:15:57 advertised.listeners = PLAINTEXT://localhost:51721,BROKER://495d7a51aff8:9093
However, Schema Registry only seems to try the host-access address:
cp-schema-registry:
2025-03-11 15:16:42 [2025-03-11 14:16:42,853] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:51721) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient)
I discovered that switching from KafkaContainer to ConfluentKafkaContainer changes the advertised listeners: with KafkaContainer the Docker-network-internal listener uses port 9092, while with ConfluentKafkaContainer it uses port 9093.
Do you have any suggestions on how to fix this issue?
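For illustration, here is a minimal sketch of the workaround that the advertised.listeners value above suggests. This is an untested assumption on our side: that the BROKER listener on port 9093 is reachable from other containers on the same Docker network and is mapped to PLAINTEXT, as the listener.security.protocol.map in the log below indicates. Everything else in the schema registry container definition stays as shown above.

        // Possible workaround sketch (not verified): point Schema Registry at the
        // Docker-network-internal BROKER listener advertised on port 9093 instead of 9092.
        val schemaRegistry: GenericContainer<*> =
            GenericContainer(schemaRegistryImage)
                .withNetwork(network)
                .withExposedPorts(8081)
                .withEnv("SCHEMA_REGISTRY_HOST_NAME", "schema-registry")
                .withEnv("SCHEMA_REGISTRY_LISTENERS", "http://0.0.0.0:8081")
                .withEnv(
                    "SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS",
                    // 9093: the BROKER listener that ConfluentKafkaContainer advertises to the Docker network
                    "PLAINTEXT://" + kafka.networkAliases[0] + ":9093",
                ).waitingFor(Wait.forHttp("/subjects").forStatusCode(200))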
Relevant log output
cp-kafka container:
2025-03-11 15:15:57 [2025-03-11 14:15:57,875] INFO KafkaConfig values:
2025-03-11 15:15:57 advertised.listeners = PLAINTEXT://localhost:51721,BROKER://495d7a51aff8:9093
2025-03-11 15:15:57 alter.config.policy.class.name = null
2025-03-11 15:15:57 alter.log.dirs.replication.quota.window.num = 11
2025-03-11 15:15:57 alter.log.dirs.replication.quota.window.size.seconds = 1
2025-03-11 15:15:57 authorizer.class.name =
2025-03-11 15:15:57 auto.create.topics.enable = true
2025-03-11 15:15:57 auto.include.jmx.reporter = true
2025-03-11 15:15:57 auto.leader.rebalance.enable = true
2025-03-11 15:15:57 background.threads = 10
2025-03-11 15:15:57 broker.heartbeat.interval.ms = 2000
2025-03-11 15:15:57 broker.id = 1
2025-03-11 15:15:57 broker.id.generation.enable = true
2025-03-11 15:15:57 broker.rack = null
2025-03-11 15:15:57 broker.session.timeout.ms = 9000
2025-03-11 15:15:57 client.quota.callback.class = null
2025-03-11 15:15:57 compression.gzip.level = -1
2025-03-11 15:15:57 compression.lz4.level = 9
2025-03-11 15:15:57 compression.type = producer
2025-03-11 15:15:57 compression.zstd.level = 3
2025-03-11 15:15:57 connection.failed.authentication.delay.ms = 100
2025-03-11 15:15:57 connections.max.idle.ms = 600000
2025-03-11 15:15:57 connections.max.reauth.ms = 0
2025-03-11 15:15:57 control.plane.listener.name = null
2025-03-11 15:15:57 controlled.shutdown.enable = true
2025-03-11 15:15:57 controlled.shutdown.max.retries = 3
2025-03-11 15:15:57 controlled.shutdown.retry.backoff.ms = 5000
2025-03-11 15:15:57 controller.listener.names = CONTROLLER
2025-03-11 15:15:57 controller.quorum.append.linger.ms = 25
2025-03-11 15:15:57 controller.quorum.bootstrap.servers = []
2025-03-11 15:15:57 controller.quorum.election.backoff.max.ms = 1000
2025-03-11 15:15:57 controller.quorum.election.timeout.ms = 1000
2025-03-11 15:15:57 controller.quorum.fetch.timeout.ms = 2000
2025-03-11 15:15:57 controller.quorum.request.timeout.ms = 2000
2025-03-11 15:15:57 controller.quorum.retry.backoff.ms = 20
2025-03-11 15:15:57 controller.quorum.voters = [1@localhost:9094]
2025-03-11 15:15:57 controller.quota.window.num = 11
2025-03-11 15:15:57 controller.quota.window.size.seconds = 1
2025-03-11 15:15:57 controller.socket.timeout.ms = 30000
2025-03-11 15:15:57 create.topic.policy.class.name = null
2025-03-11 15:15:57 default.replication.factor = 1
2025-03-11 15:15:57 delegation.token.expiry.check.interval.ms = 3600000
2025-03-11 15:15:57 delegation.token.expiry.time.ms = 86400000
2025-03-11 15:15:57 delegation.token.master.key = null
2025-03-11 15:15:57 delegation.token.max.lifetime.ms = 604800000
2025-03-11 15:15:57 delegation.token.secret.key = null
2025-03-11 15:15:57 delete.records.purgatory.purge.interval.requests = 1
2025-03-11 15:15:57 delete.topic.enable = true
2025-03-11 15:15:57 early.start.listeners = null
2025-03-11 15:15:57 eligible.leader.replicas.enable = false
2025-03-11 15:15:57 fetch.max.bytes = 57671680
2025-03-11 15:15:57 fetch.purgatory.purge.interval.requests = 1000
2025-03-11 15:15:57 group.consumer.assignors = [org.apache.kafka.coordinator.group.assignor.UniformAssignor, org.apache.kafka.coordinator.group.assignor.RangeAssignor]
2025-03-11 15:15:57 group.consumer.heartbeat.interval.ms = 5000
2025-03-11 15:15:57 group.consumer.max.heartbeat.interval.ms = 15000
2025-03-11 15:15:57 group.consumer.max.session.timeout.ms = 60000
2025-03-11 15:15:57 group.consumer.max.size = 2147483647
2025-03-11 15:15:57 group.consumer.migration.policy = disabled
2025-03-11 15:15:57 group.consumer.min.heartbeat.interval.ms = 5000
2025-03-11 15:15:57 group.consumer.min.session.timeout.ms = 45000
2025-03-11 15:15:57 group.consumer.session.timeout.ms = 45000
2025-03-11 15:15:57 group.coordinator.append.linger.ms = 10
2025-03-11 15:15:57 group.coordinator.new.enable = false
2025-03-11 15:15:57 group.coordinator.rebalance.protocols = [classic]
2025-03-11 15:15:57 group.coordinator.threads = 1
2025-03-11 15:15:57 group.initial.rebalance.delay.ms = 0
2025-03-11 15:15:57 group.max.session.timeout.ms = 1800000
2025-03-11 15:15:57 group.max.size = 2147483647
2025-03-11 15:15:57 group.min.session.timeout.ms = 6000
2025-03-11 15:15:57 initial.broker.registration.timeout.ms = 60000
2025-03-11 15:15:57 inter.broker.listener.name = BROKER
2025-03-11 15:15:57 inter.broker.protocol.version = 3.8-IV0
2025-03-11 15:15:57 kafka.metrics.polling.interval.secs = 10
2025-03-11 15:15:57 kafka.metrics.reporters = []
2025-03-11 15:15:57 leader.imbalance.check.interval.seconds = 300
2025-03-11 15:15:57 leader.imbalance.per.broker.percentage = 10
2025-03-11 15:15:57 listener.security.protocol.map = BROKER:PLAINTEXT,PLAINTEXT:PLAINTEXT,CONTROLLER:PLAINTEXT
2025-03-11 15:15:57 listeners = PLAINTEXT://0.0.0.0:9092,CONTROLLER://0.0.0.0:9094,BROKER://0.0.0.0:9093
2025-03-11 15:15:57 log.cleaner.backoff.ms = 15000
2025-03-11 15:15:57 log.cleaner.dedupe.buffer.size = 134217728
2025-03-11 15:15:57 log.cleaner.delete.retention.ms = 86400000
2025-03-11 15:15:57 log.cleaner.enable = true
2025-03-11 15:15:57 log.cleaner.io.buffer.load.factor = 0.9
2025-03-11 15:15:57 log.cleaner.io.buffer.size = 524288
2025-03-11 15:15:57 log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
2025-03-11 15:15:57 log.cleaner.max.compaction.lag.ms = 9223372036854775807
2025-03-11 15:15:57 log.cleaner.min.cleanable.ratio = 0.5
2025-03-11 15:15:57 log.cleaner.min.compaction.lag.ms = 0
2025-03-11 15:15:57 log.cleaner.threads = 1
2025-03-11 15:15:57 log.cleanup.policy = [delete]
2025-03-11 15:15:57 log.dir = /tmp/kafka-logs
2025-03-11 15:15:57 log.dir.failure.timeout.ms = 30000
2025-03-11 15:15:57 log.dirs = /var/lib/kafka/data
2025-03-11 15:15:57 log.flush.interval.messages = 9223372036854775807
2025-03-11 15:15:57 log.flush.interval.ms = null
2025-03-11 15:15:57 log.flush.offset.checkpoint.interval.ms = 60000
2025-03-11 15:15:57 log.flush.scheduler.interval.ms = 9223372036854775807
2025-03-11 15:15:57 log.flush.start.offset.checkpoint.interval.ms = 60000
2025-03-11 15:15:57 log.index.interval.bytes = 4096
2025-03-11 15:15:57 log.index.size.max.bytes = 10485760
2025-03-11 15:15:57 log.initial.task.delay.ms = 30000
2025-03-11 15:15:57 log.local.retention.bytes = -2
2025-03-11 15:15:57 log.local.retention.ms = -2
2025-03-11 15:15:57 log.message.downconversion.enable = true
2025-03-11 15:15:57 log.message.format.version = 3.0-IV1
2025-03-11 15:15:57 log.message.timestamp.after.max.ms = 9223372036854775807
2025-03-11 15:15:57 log.message.timestamp.before.max.ms = 9223372036854775807
2025-03-11 15:15:57 log.message.timestamp.difference.max.ms = 9223372036854775807
2025-03-11 15:15:57 log.message.timestamp.type = CreateTime
2025-03-11 15:15:57 log.preallocate = false
2025-03-11 15:15:57 log.retention.bytes = -1
2025-03-11 15:15:57 log.retention.check.interval.ms = 300000
2025-03-11 15:15:57 log.retention.hours = 168
2025-03-11 15:15:57 log.retention.minutes = null
2025-03-11 15:15:57 log.retention.ms = null
2025-03-11 15:15:57 log.roll.hours = 168
2025-03-11 15:15:57 log.roll.jitter.hours = 0
2025-03-11 15:15:57 log.roll.jitter.ms = null
2025-03-11 15:15:57 log.roll.ms = null
2025-03-11 15:15:57 log.segment.bytes = 1073741824
2025-03-11 15:15:57 log.segment.delete.delay.ms = 60000
2025-03-11 15:15:57 max.connection.creation.rate = 2147483647
2025-03-11 15:15:57 max.connections = 2147483647
2025-03-11 15:15:57 max.connections.per.ip = 2147483647
2025-03-11 15:15:57 max.connections.per.ip.overrides =
2025-03-11 15:15:57 max.incremental.fetch.session.cache.slots = 1000
2025-03-11 15:15:57 max.request.partition.size.limit = 2000
2025-03-11 15:15:57 message.max.bytes = 1048588
2025-03-11 15:15:57 metadata.log.dir = null
2025-03-11 15:15:57 metadata.log.max.record.bytes.between.snapshots = 20971520
2025-03-11 15:15:57 metadata.log.max.snapshot.interval.ms = 3600000
2025-03-11 15:15:57 metadata.log.segment.bytes = 1073741824
2025-03-11 15:15:57 metadata.log.segment.min.bytes = 8388608
2025-03-11 15:15:57 metadata.log.segment.ms = 604800000
2025-03-11 15:15:57 metadata.max.idle.interval.ms = 500
2025-03-11 15:15:57 metadata.max.retention.bytes = 104857600
2025-03-11 15:15:57 metadata.max.retention.ms = 604800000
2025-03-11 15:15:57 metric.reporters = []
2025-03-11 15:15:57 metrics.num.samples = 2
2025-03-11 15:15:57 metrics.recording.level = INFO
2025-03-11 15:15:57 metrics.sample.window.ms = 30000
2025-03-11 15:15:57 min.insync.replicas = 1
2025-03-11 15:15:57 node.id = 1
2025-03-11 15:15:57 num.io.threads = 8
2025-03-11 15:15:57 num.network.threads = 3
2025-03-11 15:15:57 num.partitions = 1
2025-03-11 15:15:57 num.recovery.threads.per.data.dir = 1
2025-03-11 15:15:57 num.replica.alter.log.dirs.threads = null
2025-03-11 15:15:57 num.replica.fetchers = 1
2025-03-11 15:15:57 offset.metadata.max.bytes = 4096
2025-03-11 15:15:57 offsets.commit.required.acks = -1
2025-03-11 15:15:57 offsets.commit.timeout.ms = 5000
2025-03-11 15:15:57 offsets.load.buffer.size = 5242880
2025-03-11 15:15:57 offsets.retention.check.interval.ms = 600000
2025-03-11 15:15:57 offsets.retention.minutes = 10080
2025-03-11 15:15:57 offsets.topic.compression.codec = 0
2025-03-11 15:15:57 offsets.topic.num.partitions = 1
2025-03-11 15:15:57 offsets.topic.replication.factor = 1
2025-03-11 15:15:57 offsets.topic.segment.bytes = 104857600
2025-03-11 15:15:57 password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding
2025-03-11 15:15:57 password.encoder.iterations = 4096
2025-03-11 15:15:57 password.encoder.key.length = 128
2025-03-11 15:15:57 password.encoder.keyfactory.algorithm = null
2025-03-11 15:15:57 password.encoder.old.secret = null
2025-03-11 15:15:57 password.encoder.secret = null
2025-03-11 15:15:57 principal.builder.class = class org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder
2025-03-11 15:15:57 process.roles = [broker, controller]
2025-03-11 15:15:57 producer.id.expiration.check.interval.ms = 600000
2025-03-11 15:15:57 producer.id.expiration.ms = 86400000
2025-03-11 15:15:57 producer.purgatory.purge.interval.requests = 1000
2025-03-11 15:15:57 queued.max.request.bytes = -1
2025-03-11 15:15:57 queued.max.requests = 500
2025-03-11 15:15:57 quota.window.num = 11
2025-03-11 15:15:57 quota.window.size.seconds = 1
2025-03-11 15:15:57 remote.fetch.max.wait.ms = 500
2025-03-11 15:15:57 remote.log.index.file.cache.total.size.bytes = 1073741824
2025-03-11 15:15:57 remote.log.manager.copier.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
2025-03-11 15:15:57 remote.log.manager.copy.quota.window.num = 11
2025-03-11 15:15:57 remote.log.manager.copy.quota.window.size.seconds = 1
2025-03-11 15:15:57 remote.log.manager.expiration.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
2025-03-11 15:15:57 remote.log.manager.fetch.quota.window.num = 11
2025-03-11 15:15:57 remote.log.manager.fetch.quota.window.size.seconds = 1
2025-03-11 15:15:57 remote.log.manager.task.interval.ms = 30000
2025-03-11 15:15:57 remote.log.manager.task.retry.backoff.max.ms = 30000
2025-03-11 15:15:57 remote.log.manager.task.retry.backoff.ms = 500
2025-03-11 15:15:57 remote.log.manager.task.retry.jitter = 0.2
2025-03-11 15:15:57 remote.log.manager.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.metadata.custom.metadata.max.bytes = 128
2025-03-11 15:15:57 remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
2025-03-11 15:15:57 remote.log.metadata.manager.class.path = null
2025-03-11 15:15:57 remote.log.metadata.manager.impl.prefix = rlmm.config.
2025-03-11 15:15:57 remote.log.metadata.manager.listener.name = null
2025-03-11 15:15:57 remote.log.reader.max.pending.tasks = 100
2025-03-11 15:15:57 remote.log.reader.threads = 10
2025-03-11 15:15:57 remote.log.storage.manager.class.name = null
2025-03-11 15:15:57 remote.log.storage.manager.class.path = null
2025-03-11 15:15:57 remote.log.storage.manager.impl.prefix = rsm.config.
2025-03-11 15:15:57 remote.log.storage.system.enable = false
2025-03-11 15:15:57 replica.fetch.backoff.ms = 1000
2025-03-11 15:15:57 replica.fetch.max.bytes = 1048576
2025-03-11 15:15:57 replica.fetch.min.bytes = 1
2025-03-11 15:15:57 replica.fetch.response.max.bytes = 10485760
2025-03-11 15:15:57 replica.fetch.wait.max.ms = 500
2025-03-11 15:15:57 replica.high.watermark.checkpoint.interval.ms = 5000
2025-03-11 15:15:57 replica.lag.time.max.ms = 30000
2025-03-11 15:15:57 replica.selector.class = null
2025-03-11 15:15:57 replica.socket.receive.buffer.bytes = 65536
2025-03-11 15:15:57 replica.socket.timeout.ms = 30000
2025-03-11 15:15:57 replication.quota.window.num = 11
2025-03-11 15:15:57 replication.quota.window.size.seconds = 1
2025-03-11 15:15:57 request.timeout.ms = 30000
2025-03-11 15:15:57 reserved.broker.max.id = 1000
2025-03-11 15:15:57 sasl.client.callback.handler.class = null
2025-03-11 15:15:57 sasl.enabled.mechanisms = [GSSAPI]
2025-03-11 15:15:57 sasl.jaas.config = null
2025-03-11 15:15:57 sasl.kerberos.kinit.cmd = /usr/bin/kinit
2025-03-11 15:15:57 sasl.kerberos.min.time.before.relogin = 60000
2025-03-11 15:15:57 sasl.kerberos.principal.to.local.rules = [DEFAULT]
2025-03-11 15:15:57 sasl.kerberos.service.name = null
2025-03-11 15:15:57 sasl.kerberos.ticket.renew.jitter = 0.05
2025-03-11 15:15:57 sasl.kerberos.ticket.renew.window.factor = 0.8
2025-03-11 15:15:57 sasl.login.callback.handler.class = null
2025-03-11 15:15:57 sasl.login.class = null
2025-03-11 15:15:57 sasl.login.connect.timeout.ms = null
2025-03-11 15:15:57 sasl.login.read.timeout.ms = null
2025-03-11 15:15:57 sasl.login.refresh.buffer.seconds = 300
2025-03-11 15:15:57 sasl.login.refresh.min.period.seconds = 60
2025-03-11 15:15:57 sasl.login.refresh.window.factor = 0.8
2025-03-11 15:15:57 sasl.login.refresh.window.jitter = 0.05
2025-03-11 15:15:57 sasl.login.retry.backoff.max.ms = 10000
2025-03-11 15:15:57 sasl.login.retry.backoff.ms = 100
2025-03-11 15:15:57 sasl.mechanism.controller.protocol = GSSAPI
2025-03-11 15:15:57 sasl.mechanism.inter.broker.protocol = GSSAPI
2025-03-11 15:15:57 sasl.oauthbearer.clock.skew.seconds = 30
2025-03-11 15:15:57 sasl.oauthbearer.expected.audience = null
2025-03-11 15:15:57 sasl.oauthbearer.expected.issuer = null
2025-03-11 15:15:57 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2025-03-11 15:15:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2025-03-11 15:15:57 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2025-03-11 15:15:57 sasl.oauthbearer.jwks.endpoint.url = null
2025-03-11 15:15:57 sasl.oauthbearer.scope.claim.name = scope
2025-03-11 15:15:57 sasl.oauthbearer.sub.claim.name = sub
2025-03-11 15:15:57 sasl.oauthbearer.token.endpoint.url = null
2025-03-11 15:15:57 sasl.server.callback.handler.class = null
2025-03-11 15:15:57 sasl.server.max.receive.size = 524288
2025-03-11 15:15:57 security.inter.broker.protocol = PLAINTEXT
2025-03-11 15:15:57 security.providers = null
2025-03-11 15:15:57 server.max.startup.time.ms = 9223372036854775807
2025-03-11 15:15:57 socket.connection.setup.timeout.max.ms = 30000
2025-03-11 15:15:57 socket.connection.setup.timeout.ms = 10000
2025-03-11 15:15:57 socket.listen.backlog.size = 50
2025-03-11 15:15:57 socket.receive.buffer.bytes = 102400
2025-03-11 15:15:57 socket.request.max.bytes = 104857600
2025-03-11 15:15:57 socket.send.buffer.bytes = 102400
2025-03-11 15:15:57 ssl.allow.dn.changes = false
2025-03-11 15:15:57 ssl.allow.san.changes = false
2025-03-11 15:15:57 ssl.cipher.suites = []
2025-03-11 15:15:57 ssl.client.auth = none
2025-03-11 15:15:57 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2025-03-11 15:15:57 ssl.endpoint.identification.algorithm = https
2025-03-11 15:15:57 ssl.engine.factory.class = null
2025-03-11 15:15:57 ssl.key.password = null
2025-03-11 15:15:57 ssl.keymanager.algorithm = SunX509
2025-03-11 15:15:57 ssl.keystore.certificate.chain = null
2025-03-11 15:15:57 ssl.keystore.key = null
2025-03-11 15:15:57 ssl.keystore.location = null
2025-03-11 15:15:57 ssl.keystore.password = null
2025-03-11 15:15:57 ssl.keystore.type = JKS
2025-03-11 15:15:57 ssl.principal.mapping.rules = DEFAULT
2025-03-11 15:15:57 ssl.protocol = TLSv1.3
2025-03-11 15:15:57 ssl.provider = null
2025-03-11 15:15:57 ssl.secure.random.implementation = null
2025-03-11 15:15:57 ssl.trustmanager.algorithm = PKIX
2025-03-11 15:15:57 ssl.truststore.certificates = null
2025-03-11 15:15:57 ssl.truststore.location = null
2025-03-11 15:15:57 ssl.truststore.password = null
2025-03-11 15:15:57 ssl.truststore.type = JKS
2025-03-11 15:15:57 telemetry.max.bytes = 1048576
2025-03-11 15:15:57 transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
2025-03-11 15:15:57 transaction.max.timeout.ms = 900000
2025-03-11 15:15:57 transaction.partition.verification.enable = true
2025-03-11 15:15:57 transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
2025-03-11 15:15:57 transaction.state.log.load.buffer.size = 5242880
2025-03-11 15:15:57 transaction.state.log.min.isr = 1
2025-03-11 15:15:57 transaction.state.log.num.partitions = 50
2025-03-11 15:15:57 transaction.state.log.replication.factor = 1
2025-03-11 15:15:57 transaction.state.log.segment.bytes = 104857600
2025-03-11 15:15:57 transactional.id.expiration.ms = 604800000
2025-03-11 15:15:57 unclean.leader.election.enable = false
2025-03-11 15:15:57 unstable.api.versions.enable = false
2025-03-11 15:15:57 unstable.feature.versions.enable = false
2025-03-11 15:15:57 zookeeper.clientCnxnSocket = null
2025-03-11 15:15:57 zookeeper.connect =
2025-03-11 15:15:57 zookeeper.connection.timeout.ms = null
2025-03-11 15:15:57 zookeeper.max.in.flight.requests = 10
2025-03-11 15:15:57 zookeeper.metadata.migration.enable = false
2025-03-11 15:15:57 zookeeper.metadata.migration.min.batch.size = 200
2025-03-11 15:15:57 zookeeper.session.timeout.ms = 18000
2025-03-11 15:15:57 zookeeper.set.acl = false
2025-03-11 15:15:57 zookeeper.ssl.cipher.suites = null
2025-03-11 15:15:57 zookeeper.ssl.client.enable = false
2025-03-11 15:15:57 zookeeper.ssl.crl.enable = false
2025-03-11 15:15:57 zookeeper.ssl.enabled.protocols = null
2025-03-11 15:15:57 zookeeper.ssl.endpoint.identification.algorithm = HTTPS
2025-03-11 15:15:57 zookeeper.ssl.keystore.location = null
2025-03-11 15:15:57 zookeeper.ssl.keystore.password = null
2025-03-11 15:15:57 zookeeper.ssl.keystore.type = null
2025-03-11 15:15:57 zookeeper.ssl.ocsp.enable = false
2025-03-11 15:15:57 zookeeper.ssl.protocol = TLSv1.2
2025-03-11 15:15:57 zookeeper.ssl.truststore.location = null
2025-03-11 15:15:57 zookeeper.ssl.truststore.password = null
2025-03-11 15:15:57 zookeeper.ssl.truststore.type = null
2025-03-11 15:15:57 (kafka.server.KafkaConfig)
2025-03-11 15:15:57 [2025-03-11 14:15:57,877] INFO RemoteLogManagerConfig values:
2025-03-11 15:15:57 log.local.retention.bytes = -2
2025-03-11 15:15:57 log.local.retention.ms = -2
2025-03-11 15:15:57 remote.fetch.max.wait.ms = 500
2025-03-11 15:15:57 remote.log.index.file.cache.total.size.bytes = 1073741824
2025-03-11 15:15:57 remote.log.manager.copier.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.manager.copy.max.bytes.per.second = 9223372036854775807
2025-03-11 15:15:57 remote.log.manager.copy.quota.window.num = 11
2025-03-11 15:15:57 remote.log.manager.copy.quota.window.size.seconds = 1
2025-03-11 15:15:57 remote.log.manager.expiration.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.manager.fetch.max.bytes.per.second = 9223372036854775807
2025-03-11 15:15:57 remote.log.manager.fetch.quota.window.num = 11
2025-03-11 15:15:57 remote.log.manager.fetch.quota.window.size.seconds = 1
2025-03-11 15:15:57 remote.log.manager.task.interval.ms = 30000
2025-03-11 15:15:57 remote.log.manager.task.retry.backoff.max.ms = 30000
2025-03-11 15:15:57 remote.log.manager.task.retry.backoff.ms = 500
2025-03-11 15:15:57 remote.log.manager.task.retry.jitter = 0.2
2025-03-11 15:15:57 remote.log.manager.thread.pool.size = 10
2025-03-11 15:15:57 remote.log.metadata.custom.metadata.max.bytes = 128
2025-03-11 15:15:57 remote.log.metadata.manager.class.name = org.apache.kafka.server.log.remote.metadata.storage.TopicBasedRemoteLogMetadataManager
2025-03-11 15:15:57 remote.log.metadata.manager.class.path = null
2025-03-11 15:15:57 remote.log.metadata.manager.impl.prefix = rlmm.config.
2025-03-11 15:15:57 remote.log.metadata.manager.listener.name = null
2025-03-11 15:15:57 remote.log.reader.max.pending.tasks = 100
2025-03-11 15:15:57 remote.log.reader.threads = 10
2025-03-11 15:15:57 remote.log.storage.manager.class.name = null
2025-03-11 15:15:57 remote.log.storage.manager.class.path = null
2025-03-11 15:15:57 remote.log.storage.manager.impl.prefix = rsm.config.
2025-03-11 15:15:57 remote.log.storage.system.enable = false
cp-schema-registry container:
2025-03-11 15:15:58 ===> User
2025-03-11 15:15:58 uid=1000(appuser) gid=1000(appuser) groups=1000(appuser)
2025-03-11 15:15:58 ===> Configuring ...
2025-03-11 15:16:01 ===> Running preflight checks ...
2025-03-11 15:16:01 ===> Check if Kafka is healthy ...
2025-03-11 15:16:02 [2025-03-11 14:16:02,672] INFO AdminClientConfig values:
2025-03-11 15:16:02 auto.include.jmx.reporter = true
2025-03-11 15:16:02 bootstrap.controllers = []
2025-03-11 15:16:02 bootstrap.servers = [PLAINTEXT://tc-crWvFfEX:9092]
2025-03-11 15:16:02 client.dns.lookup = use_all_dns_ips
2025-03-11 15:16:02 client.id =
2025-03-11 15:16:02 connections.max.idle.ms = 300000
2025-03-11 15:16:02 default.api.timeout.ms = 60000
2025-03-11 15:16:02 enable.metrics.push = true
2025-03-11 15:16:02 metadata.max.age.ms = 300000
2025-03-11 15:16:02 metadata.recovery.strategy = none
2025-03-11 15:16:02 metric.reporters = []
2025-03-11 15:16:02 metrics.num.samples = 2
2025-03-11 15:16:02 metrics.recording.level = INFO
2025-03-11 15:16:02 metrics.sample.window.ms = 30000
2025-03-11 15:16:02 receive.buffer.bytes = 65536
2025-03-11 15:16:02 reconnect.backoff.max.ms = 1000
2025-03-11 15:16:02 reconnect.backoff.ms = 50
2025-03-11 15:16:02 request.timeout.ms = 30000
2025-03-11 15:16:02 retries = 2147483647
2025-03-11 15:16:02 retry.backoff.max.ms = 1000
2025-03-11 15:16:02 retry.backoff.ms = 100
2025-03-11 15:16:02 sasl.client.callback.handler.class = null
2025-03-11 15:16:02 sasl.jaas.config = null
2025-03-11 15:16:02 sasl.kerberos.kinit.cmd = /usr/bin/kinit
2025-03-11 15:16:02 sasl.kerberos.min.time.before.relogin = 60000
2025-03-11 15:16:02 sasl.kerberos.service.name = null
2025-03-11 15:16:02 sasl.kerberos.ticket.renew.jitter = 0.05
2025-03-11 15:16:02 sasl.kerberos.ticket.renew.window.factor = 0.8
2025-03-11 15:16:02 sasl.login.callback.handler.class = null
2025-03-11 15:16:02 sasl.login.class = null
2025-03-11 15:16:02 sasl.login.connect.timeout.ms = null
2025-03-11 15:16:02 sasl.login.read.timeout.ms = null
2025-03-11 15:16:02 sasl.login.refresh.buffer.seconds = 300
2025-03-11 15:16:02 sasl.login.refresh.min.period.seconds = 60
2025-03-11 15:16:02 sasl.login.refresh.window.factor = 0.8
2025-03-11 15:16:02 sasl.login.refresh.window.jitter = 0.05
2025-03-11 15:16:02 sasl.login.retry.backoff.max.ms = 10000
2025-03-11 15:16:02 sasl.login.retry.backoff.ms = 100
2025-03-11 15:16:02 sasl.mechanism = GSSAPI
2025-03-11 15:16:02 sasl.oauthbearer.clock.skew.seconds = 30
2025-03-11 15:16:02 sasl.oauthbearer.expected.audience = null
2025-03-11 15:16:02 sasl.oauthbearer.expected.issuer = null
2025-03-11 15:16:02 sasl.oauthbearer.jwks.endpoint.refresh.ms = 3600000
2025-03-11 15:16:02 sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms = 10000
2025-03-11 15:16:02 sasl.oauthbearer.jwks.endpoint.retry.backoff.ms = 100
2025-03-11 15:16:02 sasl.oauthbearer.jwks.endpoint.url = null
2025-03-11 15:16:02 sasl.oauthbearer.scope.claim.name = scope
2025-03-11 15:16:02 sasl.oauthbearer.sub.claim.name = sub
2025-03-11 15:16:02 sasl.oauthbearer.token.endpoint.url = null
2025-03-11 15:16:02 security.protocol = PLAINTEXT
2025-03-11 15:16:02 security.providers = null
2025-03-11 15:16:02 send.buffer.bytes = 131072
2025-03-11 15:16:02 socket.connection.setup.timeout.max.ms = 30000
2025-03-11 15:16:02 socket.connection.setup.timeout.ms = 10000
2025-03-11 15:16:02 ssl.cipher.suites = null
2025-03-11 15:16:02 ssl.enabled.protocols = [TLSv1.2, TLSv1.3]
2025-03-11 15:16:02 ssl.endpoint.identification.algorithm = https
2025-03-11 15:16:02 ssl.engine.factory.class = null
2025-03-11 15:16:02 ssl.key.password = null
2025-03-11 15:16:02 ssl.keymanager.algorithm = SunX509
2025-03-11 15:16:02 ssl.keystore.certificate.chain = null
2025-03-11 15:16:02 ssl.keystore.key = null
2025-03-11 15:16:02 ssl.keystore.location = null
2025-03-11 15:16:02 ssl.keystore.password = null
2025-03-11 15:16:02 ssl.keystore.type = JKS
2025-03-11 15:16:02 ssl.protocol = TLSv1.3
2025-03-11 15:16:02 ssl.provider = null
2025-03-11 15:16:02 ssl.secure.random.implementation = null
2025-03-11 15:16:02 ssl.trustmanager.algorithm = PKIX
2025-03-11 15:16:02 ssl.truststore.certificates = null
2025-03-11 15:16:02 ssl.truststore.location = null
2025-03-11 15:16:02 ssl.truststore.password = null
2025-03-11 15:16:02 ssl.truststore.type = JKS
2025-03-11 15:16:02 (org.apache.kafka.clients.admin.AdminClientConfig)
2025-03-11 15:16:02 [2025-03-11 14:16:02,908] INFO Kafka version: 7.8.0-ccs (org.apache.kafka.common.utils.AppInfoParser)
2025-03-11 15:16:02 [2025-03-11 14:16:02,908] INFO Kafka commitId: cc7168da1fddfcfd (org.apache.kafka.common.utils.AppInfoParser)
2025-03-11 15:16:02 [2025-03-11 14:16:02,908] INFO Kafka startTimeMs: 1741702562904 (org.apache.kafka.common.utils.AppInfoParser)
2025-03-11 15:16:03 [2025-03-11 14:16:03,402] INFO [AdminClient clientId=adminclient-1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2025-03-11 15:16:03 [2025-03-11 14:16:03,403] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:51721) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient)
2025-03-11 15:16:03 [2025-03-11 14:16:03,505] INFO [AdminClient clientId=adminclient-1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
[...]
2025-03-11 15:16:42 [2025-03-11 14:16:42,853] INFO [AdminClient clientId=adminclient-1] Node 1 disconnected. (org.apache.kafka.clients.NetworkClient)
2025-03-11 15:16:42 [2025-03-11 14:16:42,853] WARN [AdminClient clientId=adminclient-1] Connection to node 1 (localhost/127.0.0.1:51721) could not be established. Node may not be available. (org.apache.kafka.clients.NetworkClient)
2025-03-11 15:16:42 [2025-03-11 14:16:42,914] ERROR Error while getting broker list. (io.confluent.admin.utils.ClusterStatus)
2025-03-11 15:16:42 java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2025-03-11 15:16:42 at java.base/java.util.concurrent.CompletableFuture.reportGet(Unknown Source)
2025-03-11 15:16:42 at java.base/java.util.concurrent.CompletableFuture.get(Unknown Source)
2025-03-11 15:16:42 at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:165)
2025-03-11 15:16:42 at io.confluent.admin.utils.ClusterStatus.isKafkaReady(ClusterStatus.java:147)
2025-03-11 15:16:42 at io.confluent.admin.utils.cli.KafkaReadyCommand.main(KafkaReadyCommand.java:149)
2025-03-11 15:16:42 Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting for a node assignment. Call: listNodes
2025-03-11 15:16:43 [2025-03-11 14:16:43,915] INFO Expected 1 brokers but found only 0. Trying to query Kafka for metadata again ... (io.confluent.admin.utils.ClusterStatus)
2025-03-11 15:16:43 [2025-03-11 14:16:43,915] ERROR Expected 1 brokers but found only 0. Brokers found []. (io.confluent.admin.utils.ClusterStatus)
2025-03-11 15:16:43 Using log4j config /etc/schema-registry/log4j.properties
Additional Information
No response