Replies: 2 comments
-
I used a custom image, following the blog post: https://strimzi.io/blog/2020/01/27/deploying-debezium-with-kafkaconnector-resource/
-
This image build is based on established practice; the build process was migrated from our original cp-confluent-connect image build. There are currently no logs that would let me analyze further why plugin registration fails. Are there any suggestions or solutions?
-
Custom Dockerfile:
FROM quay.io/strimzi/kafka:0.39.0-kafka-3.6.1
USER root:root
RUN microdnf install -y libaio unzip curl maven
RUN mkdir -p /opt/kafka/plugins/debezium
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/2.5.0.Final/debezium-connector-mysql-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
#------------------------------------------------------
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-oracle/2.5.0.Final/debezium-connector-oracle-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
RUN curl -L https://repo1.maven.org/maven2/com/oracle/database/jdbc/ojdbc8/19.8.0.0/ojdbc8-19.8.0.0.jar -o /opt/kafka/plugins/debezium/debezium-connector-oracle/ojdbc8-19.8.0.0.jar
RUN curl -L https://download.oracle.com/otn_software/linux/instantclient/199000/instantclient-basic-linux.x64-19.9.0.0.0dbru.zip -o instantclient.zip
RUN unzip instantclient.zip -d /opt/oracle \
    && echo /opt/oracle/instantclient_19_9 > /etc/ld.so.conf.d/oracle-instantclient.conf \
    && ldconfig \
    && rm -f instantclient.zip
RUN rm -rf /tmp/* && microdnf clean all
ENV LD_LIBRARY_PATH=/opt/oracle/instantclient_19_9:$LD_LIBRARY_PATH
#---------------------------------------------------
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-sqlserver/2.5.0.Final/debezium-connector-sqlserver-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-postgres/2.5.0.Final/debezium-connector-postgres-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-mysql/2.5.0.Final/debezium-connector-mysql-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
RUN curl -L https://repo1.maven.org/maven2/io/debezium/debezium-connector-mongodb/2.5.0.Final/debezium-connector-mongodb-2.5.0.Final-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/debezium/
RUN curl -L https://github.com/castorm/kafka-connect-http/releases/download/v0.8.11/kafka-connect-http-0.8.11-plugin.tar.gz | tar xvz -C /opt/kafka/plugins/
RUN cd /tmp/maven \
    && mvn dependency:copy-dependencies -DoutputDirectory=/opt/kafka/plugins/confluent-avro-converter
RUN rm -rf /tmp/maven && microdnf clean all
USER 1001
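Since the original question is that plugin registration fails without useful logs, one way to narrow it down is to compare the worker's GET /connector-plugins response against the connector classes the image is supposed to ship. A minimal Python sketch; the response parsing matches the standard Connect REST API shape, but the sample data below is hypothetical:

```python
import json

# Connector classes expected from the plugins baked into the image above.
EXPECTED_CLASSES = {
    "io.debezium.connector.oracle.OracleConnector",
    "io.debezium.connector.mysql.MySqlConnector",
}

def missing_plugins(connector_plugins_json: str, expected=EXPECTED_CLASSES):
    """Return the expected connector classes that are absent from a
    GET /connector-plugins response body."""
    loaded = {entry["class"] for entry in json.loads(connector_plugins_json)}
    return expected - loaded

# Sample response shaped like the Connect REST API output (hypothetical data).
sample = json.dumps([
    {"class": "io.debezium.connector.oracle.OracleConnector",
     "type": "source", "version": "2.5.0.Final"},
    {"class": "io.debezium.connector.mysql.MySqlConnector",
     "type": "source", "version": "2.5.0.Final"},
])

print(sorted(missing_plugins(sample)))  # → []
```

If a class is reported missing, the tarball most likely unpacked into an unexpected directory under /opt/kafka/plugins, which the worker does scan recursively but only for valid plugin layouts.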
KafkaConnect CRD:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: kafka-stream-connect
  namespace: kafka-cluster
spec:
  version: 3.6.1
  replicas: 3
  image: gaozuogg/kafka-connect-cluster:0.4
  bootstrapServers: kafka-stream-bk-kafka-bootstrap:9092
  config:
    group.id: kafka-stream-connect-cluster
    offset.storage.topic: kafka-stream-connect-cluster-offsets
    config.storage.topic: kafka-stream-connect-cluster-configs
    status.storage.topic: kafka-stream-connect-cluster-status
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
Connector configuration:
{
  "name": "HR_DB_CDC",
  "connector.class": "io.debezium.connector.oracle.OracleConnector",
  "key.converter": "io.confluent.connect.avro.AvroConverter",
  "value.converter": "io.confluent.connect.avro.AvroConverter",
  "topic.prefix": "HR",
  "database.hostname": "",
  "database.port": "1521",
  "database.user": "dbzuser",
  "database.password": "",
  "database.dbname": "ORCL",
  "log.mining.strategy": "online_catalog",
  "decimal.handling.mode": "string",
  "schema.history.internal.store.only.captured.tables.ddl": "true",
  "schema.history.internal.store.only.captured.databases.ddl": "true",
  "table.include.list": "....",
  "schema.history.internal.kafka.bootstrap.servers": "kafka-stream-bk-kafka-bootstrap:9092",
  "value.converter.schema.registry.url": "http://kafka-stream-registry-cp-schema-registry:8081",
  "schema.history.internal.kafka.topic": "schema-changes.inventory",
  "key.converter.schema.registry.url": "http://kafka-stream-registry-cp-schema-registry:8081"
}
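For reference, this flat map is the shape PUT /connectors/HR_DB_CDC/config expects; if the connector is instead created with POST /connectors, the name has to be lifted out into a {"name": ..., "config": {...}} envelope. A small sketch with an abbreviated config:

```python
import json

def to_create_request(flat: dict) -> dict:
    """Split a flat connector definition (like the JSON above) into the
    {"name": ..., "config": {...}} body that POST /connectors expects."""
    config = dict(flat)          # copy so the caller's dict is untouched
    name = config.pop("name")    # "name" lives at the top level, not in config
    return {"name": name, "config": config}

# Abbreviated version of the connector definition above.
flat = {
    "name": "HR_DB_CDC",
    "connector.class": "io.debezium.connector.oracle.OracleConnector",
    "tasks.max": "1",
}
payload = to_create_request(flat)
print(json.dumps(payload, indent=2))
```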
Connect logs:
2024-01-18 08:02:37,474 INFO 10.42.0.58 - - [18/Jan/2024:08:02:37 +0000] "GET /connectors?expand=info&expand=status HTTP/1.1" 200 4038 "-" "Redpanda Console" 6 (org.apache.kafka.connect.runtime.rest.RestServer) [qtp1839644942-52]
2024-01-18 08:02:37,476 INFO 10.42.0.58 - - [18/Jan/2024:08:02:37 +0000] "GET / HTTP/1.1" 200 91 "-" "Redpanda Console" 1 (org.apache.kafka.connect.runtime.rest.RestServer) [qtp1839644942-52]
2024-01-18 08:02:37,544 INFO 10.42.0.58 - - [18/Jan/2024:08:02:37 +0000] "GET / HTTP/1.1" 200 91 "-" "Redpanda Console" 2 (org.apache.kafka.connect.runtime.rest.RestServer) [qtp1839644942-36]
2024-01-18 08:02:37,545 INFO 10.42.0.58 - - [18/Jan/2024:08:02:37 +0000] "GET /connector-plugins HTTP/1.1" 200 911 "-" "Redpanda Console" 1 (org.apache.kafka.connect.runtime.rest.RestServer) [qtp1839644942-36]
2024-01-18 08:02:37,948 INFO SourceConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.oracle.OracleConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
exactly.once.support = requested
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
key.converter = class io.confluent.connect.avro.AvroConverter
name = HR_DB_CDC
offsets.storage.topic = null
predicates = []
tasks.max = 1
topic.creation.groups = []
transaction.boundary = poll
transaction.boundary.interval.ms = null
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.SourceConnectorConfig) [DistributedHerder-connect-1-1]
2024-01-18 08:02:37,948 INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.oracle.OracleConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
exactly.once.support = requested
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
key.converter = class io.confluent.connect.avro.AvroConverter
name = HR_DB_CDC
offsets.storage.topic = null
predicates = []
tasks.max = 1
topic.creation.groups = []
transaction.boundary = poll
transaction.boundary.interval.ms = null
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [DistributedHerder-connect-1-1]
2024-01-18 08:02:37,954 ERROR Failed to write task configurations to Kafka (org.apache.kafka.connect.storage.KafkaConfigBackingStore) [DistributedHerder-connect-1-1]
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:97)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:79)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
at org.apache.kafka.connect.storage.KafkaConfigBackingStore.sendPrivileged(KafkaConfigBackingStore.java:805)
at org.apache.kafka.connect.storage.KafkaConfigBackingStore.putTaskConfigs(KafkaConfigBackingStore.java:594)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$writeTaskConfigs$46(DistributedHerder.java:2139)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.writeToConfigTopicAsLeader(DistributedHerder.java:1625)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.writeTaskConfigs(DistributedHerder.java:2139)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.publishConnectorTaskConfigs(DistributedHerder.java:2095)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:2082)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithExponentialBackoffRetries(DistributedHerder.java:2025)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$null$42(DistributedHerder.java:2038)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.runRequest(DistributedHerder.java:2232)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:470)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:371)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
2024-01-18 08:02:37,955 ERROR [Worker clientId=connect-1, groupId=kafka-stream-connect-cluster] Failed to reconfigure connector's tasks (HR_DB_CDC), retrying after backoff. (org.apache.kafka.connect.runtime.distributed.DistributedHerder) [DistributedHerder-connect-1-1]
org.apache.kafka.connect.errors.ConnectException: Error writing task configurations to Kafka
at org.apache.kafka.connect.storage.KafkaConfigBackingStore.putTaskConfigs(KafkaConfigBackingStore.java:597)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$writeTaskConfigs$46(DistributedHerder.java:2139)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.writeToConfigTopicAsLeader(DistributedHerder.java:1625)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.writeTaskConfigs(DistributedHerder.java:2139)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.publishConnectorTaskConfigs(DistributedHerder.java:2095)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnector(DistributedHerder.java:2082)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.reconfigureConnectorTasksWithExponentialBackoffRetries(DistributedHerder.java:2025)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.lambda$null$42(DistributedHerder.java:2038)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.runRequest(DistributedHerder.java:2232)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.tick(DistributedHerder.java:470)
at org.apache.kafka.connect.runtime.distributed.DistributedHerder.run(DistributedHerder.java:371)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:539)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.valueOrError(FutureRecordMetadata.java:97)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:79)
at org.apache.kafka.clients.producer.internals.FutureRecordMetadata.get(FutureRecordMetadata.java:30)
at org.apache.kafka.connect.storage.KafkaConfigBackingStore.sendPrivileged(KafkaConfigBackingStore.java:805)
at org.apache.kafka.connect.storage.KafkaConfigBackingStore.putTaskConfigs(KafkaConfigBackingStore.java:594)
... 15 more
Caused by: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
2024-01-18 08:02:38,956 INFO SourceConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.oracle.OracleConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
exactly.once.support = requested
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
key.converter = class io.confluent.connect.avro.AvroConverter
name = HR_DB_CDC
offsets.storage.topic = null
predicates = []
tasks.max = 1
topic.creation.groups = []
transaction.boundary = poll
transaction.boundary.interval.ms = null
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.SourceConnectorConfig) [DistributedHerder-connect-1-1]
2024-01-18 08:02:38,956 INFO EnrichedConnectorConfig values:
config.action.reload = restart
connector.class = io.debezium.connector.oracle.OracleConnector
errors.log.enable = false
errors.log.include.messages = false
errors.retry.delay.max.ms = 60000
errors.retry.timeout = 0
errors.tolerance = none
exactly.once.support = requested
header.converter = class org.apache.kafka.connect.storage.SimpleHeaderConverter
key.converter = class io.confluent.connect.avro.AvroConverter
name = HR_DB_CDC
offsets.storage.topic = null
predicates = []
tasks.max = 1
topic.creation.groups = []
transaction.boundary = poll
transaction.boundary.interval.ms = null
transforms = []
value.converter = class io.confluent.connect.avro.AvroConverter
(org.apache.kafka.connect.runtime.ConnectorConfig$EnrichedConnectorConfig) [DistributedHerder-connect-1-1]
2024-01-18 08:02:38,963 ERROR Failed to write task configurations to Kafka (org.apache.kafka.connect.storage.KafkaConfigBackingStore) [DistributedHerder-connect-1-1]
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.UnknownServerException: The server experienced an unexpected error when processing the request.
...............
Kafka logs:
2024-01-18 08:01:13,206 INFO [Broker id=1] Stopped fetchers as part of LeaderAndIsr request correlationId 10 from controller 2 epoch 1 as part of the become-leader transition for 1 partitions (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,272 INFO [ReplicaFetcher replicaId=1, leaderId=2, fetcherId=0] Truncating partition kafka-stream-connect-cluster-status-2 with TruncationState(offset=0, completed=true) due to local high watermark 0 (kafka.server.ReplicaFetcherThread) [ReplicaFetcherThread-0-2]
2024-01-18 08:01:13,273 INFO [UnifiedLog partition=kafka-stream-connect-cluster-status-2, dir=/var/lib/kafka/data-0/kafka-log1] Truncating to 0 has no effect as the largest offset in the log is -1 (kafka.log.UnifiedLog) [ReplicaFetcherThread-0-2]
2024-01-18 08:01:13,283 INFO [LogLoader partition=kafka-stream-connect-cluster-configs-0, dir=/var/lib/kafka/data-0/kafka-log1] Loading producer state till offset 0 with message format version 2 (kafka.log.UnifiedLog$) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,284 INFO Created log for partition kafka-stream-connect-cluster-configs-0 in /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0 with properties {cleanup.policy=compact} (kafka.log.LogManager) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,284 INFO [Partition kafka-stream-connect-cluster-configs-0 broker=1] No checkpointed highwatermark is found for partition kafka-stream-connect-cluster-configs-0 (kafka.cluster.Partition) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,284 INFO [Partition kafka-stream-connect-cluster-configs-0 broker=1] Log loaded for partition kafka-stream-connect-cluster-configs-0 with initial high watermark 0 (kafka.cluster.Partition) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,284 INFO [Broker id=1] Leader kafka-stream-connect-cluster-configs-0 with topic id Some(cspplGF0TPuiM_XUbUycJA) starts at leader epoch 0 from offset 0 with partition epoch 0, high watermark 0, ISR [1,2,0], adding replicas [] and removing replicas [] . Previous leader None and previous leader epoch was -1. (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,299 TRACE [Broker id=1] Completed LeaderAndIsr request correlationId 10 from controller 2 epoch 1 for the become-leader transition for partition kafka-stream-connect-cluster-configs-0 (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,300 INFO [Broker id=1] Finished LeaderAndIsr request in 95ms correlationId 10 from controller 2 for 1 partitions (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,301 TRACE [Broker id=1] Cached leader info UpdateMetadataPartitionState(topicName='kafka-stream-connect-cluster-configs', partitionIndex=0, controllerEpoch=1, leader=1, leaderEpoch=0, isr=[1, 2, 0], zkVersion=0, replicas=[1, 2, 0], offlineReplicas=[]) for partition kafka-stream-connect-cluster-configs-0 in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 11 (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:01:13,301 INFO [Broker id=1] Add 1 partitions and deleted 0 partitions from metadata cache in response to UpdateMetadata request sent by controller 2 epoch 1 with correlation id 11 (state.change.logger) [control-plane-kafka-request-handler-0]
2024-01-18 08:02:37,222 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-3]
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at java.base/jdk.internal.misc.ScopedMemoryAccess.putIntUnalignedInternal(ScopedMemoryAccess.java:1884)
at java.base/jdk.internal.misc.ScopedMemoryAccess.putIntUnaligned(ScopedMemoryAccess.java:1872)
at java.base/java.nio.DirectByteBuffer.putInt(DirectByteBuffer.java:711)
at java.base/java.nio.DirectByteBuffer.putInt(DirectByteBuffer.java:723)
at org.apache.kafka.storage.internals.log.OffsetIndex.append(OffsetIndex.java:151)
at kafka.log.LogSegment.append(LogSegment.scala:168)
at kafka.log.LocalLog.append(LocalLog.scala:439)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:911)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1282)
at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.mutable.HashMap.map(HashMap.scala:35)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1270)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:873)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:153)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-01-18 08:02:37,488 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-4]
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.kafka.storage.internals.log.OffsetIndex.append(OffsetIndex.java:152)
at kafka.log.LogSegment.append(LogSegment.scala:168)
at kafka.log.LocalLog.append(LocalLog.scala:439)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:911)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1282)
at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.mutable.HashMap.map(HashMap.scala:35)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1270)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:873)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:153)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-01-18 08:02:37,995 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-4]
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.kafka.storage.internals.log.AbstractIndex.incrementEntries(AbstractIndex.java:411)
at org.apache.kafka.storage.internals.log.OffsetIndex.append(OffsetIndex.java:152)
at kafka.log.LogSegment.append(LogSegment.scala:168)
at kafka.log.LocalLog.append(LocalLog.scala:439)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:911)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1282)
at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.mutable.HashMap.map(HashMap.scala:35)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1270)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:873)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:153)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-01-18 08:02:39,004 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-1]
java.lang.InternalError: a fault occurred in a recent unsafe memory access operation in compiled Java code
at org.apache.kafka.storage.internals.log.OffsetIndex.append(OffsetIndex.java:155)
at kafka.log.LogSegment.append(LogSegment.scala:168)
at kafka.log.LocalLog.append(LocalLog.scala:439)
at kafka.log.UnifiedLog.append(UnifiedLog.scala:911)
at kafka.log.UnifiedLog.appendAsLeader(UnifiedLog.scala:719)
at kafka.cluster.Partition.$anonfun$appendRecordsToLeader$1(Partition.scala:1313)
at kafka.cluster.Partition.appendRecordsToLeader(Partition.scala:1301)
at kafka.server.ReplicaManager.$anonfun$appendToLocalLog$6(ReplicaManager.scala:1282)
at scala.collection.StrictOptimizedMapOps.map(StrictOptimizedMapOps.scala:28)
at scala.collection.StrictOptimizedMapOps.map$(StrictOptimizedMapOps.scala:27)
at scala.collection.mutable.HashMap.map(HashMap.scala:35)
at kafka.server.ReplicaManager.appendToLocalLog(ReplicaManager.scala:1270)
at kafka.server.ReplicaManager.appendRecords(ReplicaManager.scala:873)
at kafka.server.KafkaApis.handleProduceRequest(KafkaApis.scala:686)
at kafka.server.KafkaApis.handle(KafkaApis.scala:180)
at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:153)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-01-18 08:02:41,012 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-6]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:02:45,021 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-5]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:02:53,027 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-1]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:03:09,033 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-6]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:03:41,041 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-3]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:04:41,050 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-7]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:05:41,058 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-5]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:06:41,067 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-0]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:07:41,075 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-4]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:08:41,083 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-0]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:09:41,091 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-2]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:10:41,098 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-3]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:11:41,106 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-3]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index
2024-01-18 08:12:41,115 ERROR [ReplicaManager broker=1] Error processing append operation on partition kafka-stream-connect-cluster-configs-0 (kafka.server.ReplicaManager) [data-plane-kafka-request-handler-6]
org.apache.kafka.common.errors.InvalidOffsetException: Attempt to append an offset 3 to position 1 no larger than the last offset appended (3) to /var/lib/kafka/data-0/kafka-log1/kafka-stream-connect-cluster-configs-0/00000000000000000000.index