My Kafka cluster is:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.5.1
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: tls
        port: 9093
        type: internal
        tls: true
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
      default.replication.factor: 3
      min.insync.replicas: 2
      inter.broker.protocol.version: "3.5"
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
  zookeeper:
    replicas: 3
    storage:
      type: persistent-claim
      size: 100Gi
      deleteClaim: false
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

When the cluster is running, I see this PVC: `kubectl -n kafka-operator get pvc data-0-kafka-cluster-kafka-0 -oyaml`
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    pv.kubernetes.io/bind-completed: "yes"
    pv.kubernetes.io/bound-by-controller: "yes"
    strimzi.io/delete-claim: "false"
    volume.beta.kubernetes.io/storage-provisioner: cluster.local/local-path-storage-local-path-provisioner
    volume.kubernetes.io/selected-node: nma07-304-d19-sev-r740-2u05
    volume.kubernetes.io/storage-provisioner: cluster.local/local-path-storage-local-path-provisioner
  creationTimestamp: "2023-12-04T05:59:26Z"
  finalizers:
    - kubernetes.io/pvc-protection
  labels:
    app.kubernetes.io/instance: kafka-cluster
    app.kubernetes.io/managed-by: strimzi-cluster-operator
    app.kubernetes.io/name: kafka
    app.kubernetes.io/part-of: strimzi-kafka-cluster
    strimzi.io/cluster: kafka-cluster
    strimzi.io/component-type: kafka
    strimzi.io/kind: Kafka
    strimzi.io/name: kafka-cluster-kafka
    strimzi.io/pool-name: kafka
  name: data-0-kafka-cluster-kafka-0
  namespace: kafka-operator
  resourceVersion: "55386919"
  uid: 12f4ca0d-1fa6-4824-9e62-04694bb083c8
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: local-path
  volumeMode: Filesystem
  volumeName: pvc-12f4ca0d-1fa6-4824-9e62-04694bb083c8
status:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  phase: Bound
```

Note that `pvc.spec.volumeMode` is `Filesystem`.
Replies: 1 comment
Apache Kafka requires a filesystem on its volumes; it cannot use raw block storage.

If you want to use multiple volumes in each Kafka broker, you can add them to the `storage` array in the `Kafka` custom resource. While that is useful in many situations to improve overall capacity and, in some cases, performance, you probably do not want many small 5Gi volumes but rather, for example, several 5Ti volumes, because you are responsible for balancing the data between the disks.
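As a rough sketch, a JBOD `storage` section with two volumes per broker might look like this (the ids and sizes are illustrative values, not recommendations):

```yaml
# Illustrative storage section for the Kafka custom resource (spec.kafka.storage):
# JBOD with two persistent-claim volumes per broker.
storage:
  type: jbod
  volumes:
    - id: 0
      type: persistent-claim
      size: 5Ti
      deleteClaim: false
    - id: 1
      type: persistent-claim
      size: 5Ti
      deleteClaim: false
```

Each volume gets its own PVC per broker pod (named after the volume id, e.g. `data-0-...` and `data-1-...`), and Kafka places new partitions across the resulting log directories but does not rebalance existing data between them automatically.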