Unable to write to HA HDFS with the Confluent HDFS connector: getting "Could not initialize class org.apache.hadoop.crypto.key.kms.KMSClientProvider" error #4189
Unanswered
raviteja628
asked this question in Q&A
Replies: 1 comment 2 replies
-
I have no experience with HDFS or the HDFS connector. So I'm not sure how much I can help. The error |
-
Hi @scholzj @tombentley ,
I am trying to write data from Strimzi Kafka topics to HA HDFS using the Confluent HDFS connector. The connector is able to create the .json file on HDFS, but it cannot write any data into it because of the error below.
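(As a point of comparison, the same create-and-write path can be exercised with a plain HDFS client outside Kafka Connect. The sketch below is not from the original setup: it assumes the client configuration under /etc/hadoop/conf and reuses the keytab and topics directory from the connector spec further down; the principal is a placeholder because it is redacted in the post, and the class name HdfsWriteCheck is made up. If this write succeeds with a full Hadoop client classpath but the connector still fails, the problem is more likely a missing or conflicting jar inside the Connect image than the cluster itself.)

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public class HdfsWriteCheck {
    public static void main(String[] args) throws Exception {
        // Load the same client config the connector points at (hadoop.conf.dir)
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));

        // Kerberos login with the connector's keytab; the principal is a
        // placeholder -- use the real value of connect.hdfs.principal
        UserGroupInformation.setConfiguration(conf);
        UserGroupInformation.loginUserFromKeytab(
                "PRINCIPAL@REALM", "/etc/functional-keytab-file1.keytab");

        // Create a small file under the connector's topics.dir and write to it,
        // which is the same step where the connector task fails
        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out =
                     fs.create(new Path("/user/xfx50776/new1json/_write_check"))) {
            out.writeBytes("connectivity check\n");
        }
        System.out.println("write OK");
    }
}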

Below is the error stack trace, as shown in the KafkaConnector resource description:
Name: dashboard-hdfs-sinkconnector
Namespace: rd-dci-usage-kafka
Labels: strimzi.io/cluster=rd-dci-usage
Annotations:
API Version: kafka.strimzi.io/v1alpha1
Kind: KafkaConnector
Metadata:
Creation Timestamp: 2021-01-06T09:29:21Z
Generation: 1
Resource Version: 367686589
Self Link: /apis/kafka.strimzi.io/v1alpha1/namespaces/rd-dci-usage-kafka/kafkaconnectors/dashboard-hdfs-sinkconnector
UID: a8a1fad8-5001-11eb-8c98-005056a78d82
Spec:
Class: io.confluent.connect.hdfs.HdfsSinkConnector
Config:
connect.hdfs.keytab: /etc/functional-keytab-file1.keytab
connect.hdfs.principal: [email protected]
flush.size: 1
format.class: io.confluent.connect.hdfs.json.JsonFormat
hadoop.conf.dir: /etc/hadoop/conf
hadoop.home: /local/apps/cloudera/parcels/local/apps/cloudera/parcels/CDH-5.15.1-1.cdh5.15.1.p3321.3568/lib/hadoop
hdfs.authentication.kerberos: true
hdfs.namenode.principal: hdfs/[email protected]
kerberos.ticket.renew.period.ms: 360000
key.converter: org.apache.kafka.connect.storage.StringConverter
logs.dir: ${topic}
Name: dashboard-hdfs-sinkconnector
ssl.keystore.location: /local/apps/cloudera/security/pki/ca-certs.jks
ssl.truststore.location: /local/apps/cloudera/security/pki/ca-certs.jks
store.url: hdfs://dev-scc:8020
Topics: dashboard-sample3
topics.dir: /user/xfx50776/new1json
value.converter: org.apache.kafka.connect.json.JsonConverter
value.converter.schemas.enable: false
Tasks Max: 1
Status:
Conditions:
Last Transition Time: 2021-01-06T12:12:58.914Z
Status: True
Type: Ready
Connector Status:
Connector:
State: RUNNING
worker_id: 192.168.16.178:8083
Name: dashboard-hdfs-sinkconnector
Tasks:
Id: 0
State: FAILED
Trace: org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:561)
at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:177)
at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:227)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: com.google.common.util.concurrent.ExecutionError: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.crypto.key.kms.KMSClientProvider
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2261)
at com.google.common.cache.LocalCache.get(LocalCache.java:4000)
at com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4789)
at org.apache.hadoop.hdfs.KeyProviderCache.get(KeyProviderCache.java:76)
at org.apache.hadoop.hdfs.DFSClient.getKeyProvider(DFSClient.java:2975)
at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1045)
at org.apache.hadoop.hdfs.DFSClient.createWrappedOutputStream(DFSClient.java:1028)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:478)
at org.apache.hadoop.hdfs.DistributedFileSystem$8.doCall(DistributedFileSystem.java:472)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:472)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:413)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1072)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1053)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:942)
at io.confluent.connect.hdfs.storage.HdfsStorage.create(HdfsStorage.java:95)
at io.confluent.connect.hdfs.json.JsonRecordWriterProvider$1.<init>(JsonRecordWriterProvider.java:71)
at io.confluent.connect.hdfs.json.JsonRecordWriterProvider.getRecordWriter(JsonRecordWriterProvider.java:70)
at io.confluent.connect.hdfs.json.JsonRecordWriterProvider.getRecordWriter(JsonRecordWriterProvider.java:39)
at io.confluent.connect.hdfs.TopicPartitionWriter.getWriter(TopicPartitionWriter.java:646)
at io.confluent.connect.hdfs.TopicPartitionWriter.writeRecord(TopicPartitionWriter.java:720)
at io.confluent.connect.hdfs.TopicPartitionWriter.write(TopicPartitionWriter.java:384)
at io.confluent.connect.hdfs.DataWriter.write(DataWriter.java:386)
at io.confluent.connect.hdfs.HdfsSinkTask.put(HdfsSinkTask.java:124)
at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:539)
... 10 more
Caused by: java.lang.NoClassDefFoundError: Could not initialize class org.apache.hadoop.crypto.key.kms.KMSClientProvider
at org.apache.hadoop.crypto.key.kms.KMSClientProvider$Factory.createProvider(KMSClientProvider.java:321)
at org.apache.hadoop.crypto.key.KeyProviderFactory.get(KeyProviderFactory.java:96)
at org.apache.hadoop.util.KMSUtil.createKeyProviderFromUri(KMSUtil.java:63)
at org.apache.hadoop.hdfs.KeyProviderCache$2.call(KeyProviderCache.java:79)
at org.apache.hadoop.hdfs.KeyProviderCache$2.call(KeyProviderCache.java:76)
at com.google.common.cache.LocalCache$LocalManualCache$1.load(LocalCache.java:4792)
at com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3599)
at com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2379)
at com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2342)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2257)
... 34 more
Observed Generation: 1
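A note on why KMS shows up here at all: the trace fails inside DFSClient.getKeyProvider during the file create, which means the HDFS client resolved a KMS key provider URI, i.e. HDFS transparent encryption is configured for this cluster, so KMSClientProvider has to load inside the Connect pod on every write. The sketch below is an assumption rather than something from the post (the class name KeyProviderUriCheck is made up and the property names are the standard Hadoop/CDH ones); it prints what the mounted client config says, though on newer Hadoop clients the URI can also be handed out by the NameNode.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;

public class KeyProviderUriCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        conf.addResource(new Path("/etc/hadoop/conf/core-site.xml"));
        conf.addResource(new Path("/etc/hadoop/conf/hdfs-site.xml"));
        // Property used by CDH 5.x / Hadoop 2.6 HDFS clients for encryption zones
        System.out.println("dfs.encryption.key.provider.uri = "
                + conf.get("dfs.encryption.key.provider.uri"));
        // Property used by newer Hadoop clients (replaces the one above)
        System.out.println("hadoop.security.key.provider.path = "
                + conf.get("hadoop.security.key.provider.path"));
    }
}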
This is the file generated on HDFS:
[screenshot of the generated .json file]
I need some help here to resolve this error.
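One thing worth knowing about this particular exception: a NoClassDefFoundError that says "Could not initialize class ..." means the JVM found KMSClientProvider but its static initialization already failed on an earlier attempt, and the original root cause (typically an ExceptionInInitializerError pointing at a missing or conflicting dependency on the connector's plugin classpath) is not part of this trace. Below is a minimal sketch to surface that first failure, assuming it is compiled and run inside the Kafka Connect container with the same classpath as the HDFS connector plugin (the class name KmsInitCheck is made up for the example).

public class KmsInitCheck {
    public static void main(String[] args) {
        try {
            // true => run the static initializer now, so the real root cause is
            // thrown here instead of the later "Could not initialize class" error
            Class.forName("org.apache.hadoop.crypto.key.kms.KMSClientProvider",
                    true, KmsInitCheck.class.getClassLoader());
            System.out.println("KMSClientProvider initialized OK");
        } catch (Throwable t) {
            // Expect ExceptionInInitializerError or NoClassDefFoundError naming
            // the class/jar that is actually missing or conflicting
            t.printStackTrace();
        }
    }
}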