Assistance required with a MirrorMaker 2 manifest: supporting a Kyverno policy on the init container created by the Strimzi operator CR #12275
Replies: 1 comment · 11 replies
-
A good start would be to:
-
Yes, it was a secret issue; I fixed it. Here is the resulting manifest:

apiVersion: v1
items:
- apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
annotations:
meta.helm.sh/release-name: dtrc-strimzi-mm2
meta.helm.sh/release-namespace: dtrc
creationTimestamp: "2025-12-23T11:35:40Z"
generation: 3
labels:
app: strimzi-kafka-mirrormaker2
app.kubernetes.io/managed-by: Helm
helm.toolkit.fluxcd.io/name: kafka-mirror-maker2
helm.toolkit.fluxcd.io/namespace: schiff-tenant
strimzi.io/cluster: dtrc-strimzi
name: dtrc-strimzi-kafka-mm2
namespace: dtrc
resourceVersion: "1447166672"
uid: 7063a6f4-eded-4973-b5aa-6f6289ea528a
spec:
clusters:
- alias: source-cluster
authentication:
passwordSecret:
password: password
secretName: ap-strimzi-kafka-user-source-test
type: scram-sha-512
username: ap-strimzi-kafka-user-source-test
bootstrapServers: 10.100.221.89:9010
config:
sasl.mechanism: SCRAM-SHA-512
security.protocol: SASL_PLAINTEXT
- alias: target-cluster
authentication:
passwordSecret:
password: password
secretName: ap-strimzi-kafka-user-kafka-mm2
type: scram-sha-512
username: ap-strimzi-kafka-user-kafka-mm2
bootstrapServers: 10.100.90.0:9010
config:
config.storage.replication.factor: -1
offset.storage.replication.factor: -1
sasl.mechanism: SCRAM-SHA-512
security.protocol: SASL_PLAINTEXT
status.storage.replication.factor: -1
connectCluster: target-cluster
mirrors:
- checkpointConnector:
config:
checkpoints.topic.replication.factor: -1
refresh.groups.interval.seconds: 2
replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
sync.group.offsets.enabled: "true"
tasksMax: 3
groupsPattern: .*
heartbeatConnector:
config:
replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
sourceCluster: source-cluster
sourceConnector:
config:
offset-syncs.topic.replication.factor: -1
refresh.topics.interval.seconds: 2
replication.factor: -1
replication.policy.class: org.apache.kafka.connect.mirror.IdentityReplicationPolicy
sync.topic.acls.enabled: "true"
topic.blacklist: source-cluster.checkpoints.internal, source-cluster.cps-data-updated-events,
source-cluster.dmi-cm-events, source-cluster.dmi-ncmp-cm-avc-subscription,
source-cluster.ncmp-async-m2m, source-cluster.ncmp-dmi-cm-avc-subscription-ncmp-dmi-plugin,
source-cluster.strimzi.cruisecontrol.metrics, source-cluster.strimzi.cruisecontrol.modeltrainingsamples,
source-cluster.strimzi.cruisecontrol.partitionmetricsamples, mirrormaker2-cluster-configs,
mirrormaker2-cluster-offsets, mirrormaker2-cluster-status, mm2-offset-syncs.target-cluster.internal,
strimzi.cruisecontrol.metrics, strimzi.cruisecontrol.modeltrainingsamples,
strimzi.cruisecontrol.partitionmetricsamples
topics: .*
transforms: dropPrefix
transforms.dropPrefix.regex: source-cluster\\.(.*)
transforms.dropPrefix.replacement: $1
transforms.dropPrefix.type: org.apache.kafka.connect.transforms.RegexRouter
tasksMax: 3
targetCluster: target-cluster
topicsPattern: .*
replicas: 1
resources:
limits:
cpu: 2
memory: 2048Mi
requests:
cpu: 600m
memory: 1024Mi
template:
connectContainer:
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
initContainer:
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
pod:
imagePullSecrets:
- name: dtrc-docker-registry-key
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
version: 3.8.0
status:
conditions:
- lastTransitionTime: "2025-12-29T03:05:20.329673820Z"
status: "True"
type: Ready
connectors:
- connector:
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
name: source-cluster->target-cluster.MirrorCheckpointConnector
tasks:
- id: 0
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
type: source
- connector:
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
name: source-cluster->target-cluster.MirrorHeartbeatConnector
tasks:
- id: 0
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
type: source
- connector:
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
name: source-cluster->target-cluster.MirrorSourceConnector
tasks:
- id: 0
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
- id: 1
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
- id: 2
state: RUNNING
worker_id: dtrc-strimzi-kafka-mm2-mirrormaker2-0.dtrc-strimzi-kafka-mm2-mirrormaker2.dtrc.svc:8083
type: source
labelSelector: strimzi.io/cluster=dtrc-strimzi-kafka-mm2,strimzi.io/name=dtrc-strimzi-kafka-mm2-mirrormaker2,strimzi.io/kind=KafkaMirrorMaker2
observedGeneration: 3
replicas: 1
url: http://dtrc-strimzi-kafka-mm2-mirrormaker2-api.dtrc.svc:8083
kind: List
metadata:
  resourceVersion: ""

But the same warning still persists for the init container. Could you please suggest?
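As a side note, the `dropPrefix` RegexRouter in the CR above is configured with regex `source-cluster\.(.*)` and replacement `$1`. Its renaming effect can be sketched in Python (a rough approximation of the Java transform, not the actual Connect code):

```python
import re

# Rough Python approximation of Kafka Connect's RegexRouter as configured
# in the CR: pattern "source-cluster\.(.*)", replacement "$1".
# RegexRouter only rewrites the topic when the pattern matches the whole name.
def drop_prefix(topic: str) -> str:
    pattern = re.compile(r"source-cluster\.(.*)")
    m = pattern.fullmatch(topic)
    return m.expand(r"\1") if m else topic

print(drop_prefix("source-cluster.orders"))  # -> orders
print(drop_prefix("other-topic"))            # no match, unchanged
```

Combined with the IdentityReplicationPolicy, this keeps mirrored topic names identical on the target cluster.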
-
Strange. The configuration looks good otherwise. Can you:
-
Below is the Pod manifest (`kubectl get pod dtrc-strimzi-kafka-mm2-mirrormaker2-0 -n dtrc -o yaml`); can this help?

apiVersion: v1
kind: Pod
metadata:
annotations:
istio.io/rev: default
kubectl.kubernetes.io/default-container: dtrc-strimzi-kafka-mm2-mirrormaker2
kubectl.kubernetes.io/default-logs-container: dtrc-strimzi-kafka-mm2-mirrormaker2
prometheus.io/path: /stats/prometheus
prometheus.io/port: "15020"
prometheus.io/scrape: "true"
sidecar.istio.io/status: '{"initContainers":["istio-init","istio-proxy"],"containers":null,"volumes":["workload-socket","credential-socket","workload-certs","istio-envoy","istio-data","istio-podinfo","istio-token","istiod-ca-cert"],"imagePullSecrets":["tnap-docker-registry-key"],"revision":"default"}'
strimzi.io/auth-hash: "1403824566"
strimzi.io/logging-hash: 06ee78c4
strimzi.io/revision: 5113912a
creationTimestamp: "2025-12-24T14:25:20Z"
generation: 1
labels:
app: strimzi-kafka-mirrormaker2
app.kubernetes.io/instance: dtrc-strimzi-kafka-mm2
app.kubernetes.io/managed-by: strimzi-cluster-operator
app.kubernetes.io/name: kafka-mirror-maker-2
app.kubernetes.io/part-of: strimzi-dtrc-strimzi-kafka-mm2
helm.toolkit.fluxcd.io/name: kafka-mirror-maker2
helm.toolkit.fluxcd.io/namespace: schiff-tenant
security.istio.io/tlsMode: istio
service.istio.io/canonical-name: kafka-mirror-maker-2
service.istio.io/canonical-revision: latest
statefulset.kubernetes.io/pod-name: dtrc-strimzi-kafka-mm2-mirrormaker2-0
strimzi.io/cluster: dtrc-strimzi-kafka-mm2
strimzi.io/component-type: kafka-mirror-maker-2
strimzi.io/controller: strimzipodset
strimzi.io/controller-name: dtrc-strimzi-kafka-mm2-mirrormaker2
strimzi.io/kind: KafkaMirrorMaker2
strimzi.io/name: dtrc-strimzi-kafka-mm2-mirrormaker2
strimzi.io/pod-name: dtrc-strimzi-kafka-mm2-mirrormaker2-0
name: dtrc-strimzi-kafka-mm2-mirrormaker2-0
namespace: dtrc
ownerReferences:
- apiVersion: core.strimzi.io/v1beta2
blockOwnerDeletion: false
controller: true
kind: StrimziPodSet
name: dtrc-strimzi-kafka-mm2-mirrormaker2
uid: d3f7f0ea-8cc1-4175-98e1-2d7476f4d637
resourceVersion: "1433415034"
uid: ee7df8b5-f9cf-463a-8574-2b848586239d
spec:
affinity: {}
containers:
- args:
- /opt/kafka/kafka_mirror_maker_2_run.sh
env:
- name: KAFKA_CONNECT_CONFIGURATION
value: |
config.storage.topic=mirrormaker2-cluster-configs
group.id=mirrormaker2-cluster
status.storage.topic=mirrormaker2-cluster-status
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
offset.storage.topic=mirrormaker2-cluster-offsets
config.providers=file
value.converter=org.apache.kafka.connect.converters.ByteArrayConverter
key.converter=org.apache.kafka.connect.converters.ByteArrayConverter
header.converter=org.apache.kafka.connect.converters.ByteArrayConverter
config.storage.replication.factor=-1
offset.storage.replication.factor=-1
status.storage.replication.factor=-1
- name: KAFKA_CONNECT_METRICS_ENABLED
value: "false"
- name: KAFKA_CONNECT_BOOTSTRAP_SERVERS
value: 10.100.90.0:9010
- name: STRIMZI_KAFKA_GC_LOG_ENABLED
value: "false"
- name: STRIMZI_DYNAMIC_HEAP_PERCENTAGE
value: "75"
- name: KAFKA_CONNECT_SASL_USERNAME
value: ap-strimzi-kafka-user-kafka-mm2
- name: KAFKA_CONNECT_SASL_PASSWORD_FILE
value: ap-strimzi-kafka-user-kafka-mm2/password
- name: KAFKA_CONNECT_SASL_MECHANISM
value: scram-sha-512
- name: KAFKA_MIRRORMAKER_2_CLUSTERS
value: source-cluster;target-cluster
- name: KAFKA_MIRRORMAKER_2_SASL_PASSWORD_FILES_CLUSTERS
value: |-
source-cluster=ap-strimzi-kafka-user-source-test/password
target-cluster=ap-strimzi-kafka-user-kafka-mm2/password
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/quay.io/strimzi/kafka:0.44.0-kafka-3.8.0
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /app-health/dtrc-strimzi-kafka-mm2-mirrormaker2/livez
port: 15020
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: dtrc-strimzi-kafka-mm2-mirrormaker2
ports:
- containerPort: 8083
name: rest-api
protocol: TCP
readinessProbe:
failureThreshold: 3
httpGet:
path: /app-health/dtrc-strimzi-kafka-mm2-mirrormaker2/readyz
port: 15020
scheme: HTTP
initialDelaySeconds: 60
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: "2"
memory: 2Gi
requests:
cpu: 600m
memory: 1Gi
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: true
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /tmp
name: strimzi-tmp
- mountPath: /opt/kafka/custom-config/
name: kafka-metrics-and-logging
- mountPath: /opt/kafka/connect-password/ap-strimzi-kafka-user-kafka-mm2
name: ap-strimzi-kafka-user-kafka-mm2
- mountPath: /opt/kafka/mm2-password/source-cluster/ap-strimzi-kafka-user-source-test
name: source-cluster-ap-strimzi-kafka-user-source-test
- mountPath: /opt/kafka/mm2-password/target-cluster/ap-strimzi-kafka-user-kafka-mm2
name: target-cluster-ap-strimzi-kafka-user-kafka-mm2
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
hostname: dtrc-strimzi-kafka-mm2-mirrormaker2-0
imagePullSecrets:
- name: tnap-docker-registry-key
- name: dtrc-docker-registry-key
initContainers:
- args:
- -p
- "15001"
- -z
- "15006"
- -u
- "1337"
- -m
- REDIRECT
- -i
- '*'
- -x
- ""
- -b
- '*'
- -d
- 15090,15021,15020
- --log_output_level=default:info
command:
- /bin/sh
- -c
- |
update-alternatives --set iptables /usr/sbin/iptables-nft;
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft;
exec /usr/local/bin/pilot-agent istio-iptables "$@"
env:
- name: ISTIO_DUAL_STACK
value: "true"
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2:1.23.1
imagePullPolicy: IfNotPresent
name: istio-init
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
securityContext:
allowPrivilegeEscalation: false
capabilities:
add:
- NET_ADMIN
- NET_RAW
drop:
- ALL
- CAP_NET_RAW
privileged: false
readOnlyRootFilesystem: false
runAsGroup: 0
runAsNonRoot: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
- args:
- proxy
- sidecar
- --domain
- $(POD_NAMESPACE).svc.cluster.local
- --proxyLogLevel=warning
- --proxyComponentLogLevel=misc:error
- --log_output_level=default:info
env:
- name: PILOT_CERT_PROVIDER
value: istiod
- name: CA_ADDR
value: istiod.dtrc-system.svc:15012
- name: POD_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: INSTANCE_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.podIP
- name: SERVICE_ACCOUNT
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.serviceAccountName
- name: HOST_IP
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: status.hostIP
- name: ISTIO_CPU_LIMIT
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.cpu
- name: PROXY_CONFIG
value: |
{"discoveryAddress":"istiod.dtrc-system.svc:15012","tracing":{"zipkin":{"address":"jaeger-collector.istio-config:9411"},"sampling":100},"proxyMetadata":{"ISTIO_DUAL_STACK":"true"}}
- name: ISTIO_META_POD_PORTS
value: |-
[
{"name":"rest-api","containerPort":8083,"protocol":"TCP"}
]
- name: ISTIO_META_APP_CONTAINERS
value: dtrc-strimzi-kafka-mm2-mirrormaker2
- name: GOMEMLIMIT
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.memory
- name: GOMAXPROCS
valueFrom:
resourceFieldRef:
divisor: "0"
resource: limits.cpu
- name: ISTIO_META_CLUSTER_ID
value: Kubernetes
- name: ISTIO_META_NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: ISTIO_META_INTERCEPTION_MODE
value: REDIRECT
- name: ISTIO_META_WORKLOAD_NAME
value: dtrc-strimzi-kafka-mm2-mirrormaker2-0
- name: ISTIO_META_OWNER
value: kubernetes://apis/v1/namespaces/dtrc/pods/dtrc-strimzi-kafka-mm2-mirrormaker2-0
- name: ISTIO_META_MESH_ID
value: cluster.local
- name: TRUST_DOMAIN
value: cluster.local
- name: ISTIO_DUAL_STACK
value: "true"
- name: ISTIO_KUBE_APP_PROBERS
value: '{"/app-health/dtrc-strimzi-kafka-mm2-mirrormaker2/livez":{"httpGet":{"path":"/","port":8083,"scheme":"HTTP"},"timeoutSeconds":5},"/app-health/dtrc-strimzi-kafka-mm2-mirrormaker2/readyz":{"httpGet":{"path":"/","port":8083,"scheme":"HTTP"},"timeoutSeconds":5}}'
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2:1.23.1
imagePullPolicy: IfNotPresent
lifecycle:
preStop:
exec:
command:
- pilot-agent
- request
- --debug-port=15020
- POST
- drain
name: istio-proxy
ports:
- containerPort: 15090
name: http-envoy-prom
protocol: TCP
readinessProbe:
failureThreshold: 4
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
periodSeconds: 15
successThreshold: 1
timeoutSeconds: 3
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
restartPolicy: Always
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
- CAP_NET_RAW
privileged: false
readOnlyRootFilesystem: true
runAsGroup: 1337
runAsNonRoot: true
runAsUser: 1337
startupProbe:
failureThreshold: 600
httpGet:
path: /healthz/ready
port: 15021
scheme: HTTP
periodSeconds: 1
successThreshold: 1
timeoutSeconds: 3
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /var/run/secrets/workload-spiffe-uds
name: workload-socket
- mountPath: /var/run/secrets/credential-uds
name: credential-socket
- mountPath: /var/run/secrets/workload-spiffe-credentials
name: workload-certs
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/lib/istio/data
name: istio-data
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /var/run/secrets/tokens
name: istio-token
- mountPath: /etc/istio/pod
name: istio-podinfo
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
nodeName: dtrc-2-pool-tst-2-vrqz2-qpkrr
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
fsGroup: 1000
runAsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
serviceAccount: dtrc-strimzi-kafka-mm2-mirrormaker2
serviceAccountName: dtrc-strimzi-kafka-mm2-mirrormaker2
subdomain: dtrc-strimzi-kafka-mm2-mirrormaker2
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
tolerationSeconds: 300
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
tolerationSeconds: 300
volumes:
- emptyDir:
sizeLimit: 64Mi
name: workload-socket
- emptyDir:
sizeLimit: 64Mi
name: credential-socket
- emptyDir:
sizeLimit: 64Mi
name: workload-certs
- emptyDir:
medium: Memory
sizeLimit: 64Mi
name: istio-envoy
- emptyDir:
sizeLimit: 64Mi
name: istio-data
- downwardAPI:
defaultMode: 420
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.labels
path: labels
- fieldRef:
apiVersion: v1
fieldPath: metadata.annotations
path: annotations
name: istio-podinfo
- name: istio-token
projected:
defaultMode: 420
sources:
- serviceAccountToken:
audience: istio-ca
expirationSeconds: 43200
path: istio-token
- configMap:
defaultMode: 420
name: istio-ca-root-cert
name: istiod-ca-cert
- emptyDir:
medium: Memory
sizeLimit: 5Mi
name: strimzi-tmp
- configMap:
defaultMode: 420
name: dtrc-strimzi-kafka-mm2-mirrormaker2-config
name: kafka-metrics-and-logging
- name: ap-strimzi-kafka-user-kafka-mm2
secret:
defaultMode: 292
secretName: ap-strimzi-kafka-user-kafka-mm2
- name: source-cluster-ap-strimzi-kafka-user-source-test
secret:
defaultMode: 292
secretName: ap-strimzi-kafka-user-source-test
- name: target-cluster-ap-strimzi-kafka-user-kafka-mm2
secret:
defaultMode: 292
secretName: ap-strimzi-kafka-user-kafka-mm2
- name: kube-api-access-zlq4g
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace
status:
conditions:
- lastProbeTime: null
lastTransitionTime: "2025-12-24T14:25:23Z"
status: "True"
type: PodReadyToStartContainers
- lastProbeTime: null
lastTransitionTime: "2025-12-24T14:25:25Z"
status: "True"
type: Initialized
- lastProbeTime: null
lastTransitionTime: "2025-12-24T14:26:28Z"
status: "True"
type: Ready
- lastProbeTime: null
lastTransitionTime: "2025-12-24T14:26:28Z"
status: "True"
type: ContainersReady
- lastProbeTime: null
lastTransitionTime: "2025-12-24T14:25:20Z"
status: "True"
type: PodScheduled
containerStatuses:
- allocatedResources:
cpu: 600m
memory: 1Gi
containerID: containerd://ed3325d6e466f7999dda92e9732d3b6099ed808ee067337c5df2aaf4be10b2eb
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/quay.io/strimzi/kafka:0.44.0-kafka-3.8.0
imageID: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/quay.io/strimzi/kafka@sha256:44b298e996ae774ff1cc85063944b65b21135b82750e4cf12b47a4192535d87b
lastState: {}
name: dtrc-strimzi-kafka-mm2-mirrormaker2
ready: true
resources:
limits:
cpu: "2"
memory: 2Gi
requests:
cpu: 600m
memory: 1Gi
restartCount: 0
started: true
state:
running:
startedAt: "2025-12-24T14:25:25Z"
user:
linux:
gid: 1000
supplementalGroups:
- 1000
uid: 1000
volumeMounts:
- mountPath: /tmp
name: strimzi-tmp
- mountPath: /opt/kafka/custom-config/
name: kafka-metrics-and-logging
- mountPath: /opt/kafka/connect-password/ap-strimzi-kafka-user-kafka-mm2
name: ap-strimzi-kafka-user-kafka-mm2
- mountPath: /opt/kafka/mm2-password/source-cluster/ap-strimzi-kafka-user-source-test
name: source-cluster-ap-strimzi-kafka-user-source-test
- mountPath: /opt/kafka/mm2-password/target-cluster/ap-strimzi-kafka-user-kafka-mm2
name: target-cluster-ap-strimzi-kafka-user-kafka-mm2
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
recursiveReadOnly: Disabled
hostIP: 2a01:598:7e0:142::6
hostIPs:
- ip: 2a01:598:7e0:142::6
- ip: 10.100.90.70
initContainerStatuses:
- allocatedResources:
cpu: 100m
memory: 128Mi
containerID: containerd://1b3eba2088e9eded74e93f14605bba338943dc8b849527111d365dbe67355e09
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2:1.23.1
imageID: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2@sha256:cc335d0d284fa47fa1c0fe4ed229497f8e9898615a61ddfcaf782be997aca141
lastState: {}
name: istio-init
ready: true
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
restartCount: 0
started: false
state:
terminated:
containerID: containerd://1b3eba2088e9eded74e93f14605bba338943dc8b849527111d365dbe67355e09
exitCode: 0
finishedAt: "2025-12-24T14:25:23Z"
reason: Completed
startedAt: "2025-12-24T14:25:23Z"
user:
linux:
gid: 0
supplementalGroups:
- 0
- 1000
uid: 0
volumeMounts:
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
recursiveReadOnly: Disabled
- allocatedResources:
cpu: 100m
memory: 128Mi
containerID: containerd://61ac56eed0fa736948e5411aae1ca7d9c48a0163d17f383953f57a4bb29a6f6e
image: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2:1.23.1
imageID: artifactory.yard-bootstrap-ztn.dev-test.example.net/dtt-dtrc-dtrc_lab-lab-docker/istio/proxyv2@sha256:cc335d0d284fa47fa1c0fe4ed229497f8e9898615a61ddfcaf782be997aca141
lastState: {}
name: istio-proxy
ready: true
resources:
limits:
cpu: "2"
memory: 1Gi
requests:
cpu: 100m
memory: 128Mi
restartCount: 0
started: true
state:
running:
startedAt: "2025-12-24T14:25:23Z"
user:
linux:
gid: 1337
supplementalGroups:
- 1000
- 1337
uid: 1337
volumeMounts:
- mountPath: /var/run/secrets/workload-spiffe-uds
name: workload-socket
- mountPath: /var/run/secrets/credential-uds
name: credential-socket
- mountPath: /var/run/secrets/workload-spiffe-credentials
name: workload-certs
- mountPath: /var/run/secrets/istio
name: istiod-ca-cert
- mountPath: /var/lib/istio/data
name: istio-data
- mountPath: /etc/istio/proxy
name: istio-envoy
- mountPath: /var/run/secrets/tokens
name: istio-token
- mountPath: /etc/istio/pod
name: istio-podinfo
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-zlq4g
readOnly: true
recursiveReadOnly: Disabled
phase: Running
podIP: 2a01:598:7e0:143::150
podIPs:
- ip: 2a01:598:7e0:143::150
- ip: 172.30.1.80
qosClass: Burstable
  startTime: "2025-12-24T14:25:20Z"
-
Well, from the Pod it looks like Strimzi did exactly what you asked for: both the Strimzi Pod and the container have the security context you configured there. Or which parts exactly are configured incorrectly? The Istio containers are not managed by Strimzi; they are injected by Istio, so you need to configure them somewhere in Istio. But as far as I know, Istio needs some security privileges to work, so I'm not sure what is or is not possible. Also, please keep in mind that Strimzi does not really support or integrate with Istio, so I'm not sure whether this would work regardless of your security context configuration.
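Since the Istio-injected `istio-init` container has to run as root to set up iptables, one common workaround on the Kyverno side (an assumption here, not something discussed above; all names are hypothetical placeholders) is a `PolicyException` that exempts the affected pods from the failing rule:

```yaml
# Hypothetical sketch: exempt the MM2 pods from the pod-security rule that
# trips on istio-init. Policy and rule names below are assumptions and must
# match your actual Kyverno policy.
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: istio-init-exception
  namespace: dtrc
spec:
  exceptions:
    - policyName: require-run-as-nonroot   # assumed policy name
      ruleNames:
        - run-as-non-root                  # assumed rule name
  match:
    any:
      - resources:
          kinds:
            - Pod
          namespaces:
            - dtrc
```

Whether an exception is acceptable is a security decision for your cluster; the alternative is reconfiguring Istio itself (e.g. CNI-based traffic redirection) so that no privileged init container is injected.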
-
Sure, thanks for the detailed information.
-
We have the below Helm manifest template for the Kyverno policy. It works for the pod and the container, but not for the init container. Below are the manifest details.
```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaMirrorMaker2
metadata:
  name: {{ .Values.name }}
  namespace: {{ .Values.namespace }}
  labels:
    strimzi.io/cluster: {{ .Values.kafka.cluster }}
    app: {{ .Values.labels.app }}
    {{- with .Values.labels.extra }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
spec:
  version: {{ .Values.version }}
  replicas: {{ .Values.replicas }}
  connectCluster: {{ .Values.connectCluster }}
  resources:
    {{- if .Values.resources }}
    {{- toYaml .Values.resources | nindent 4 }}
    {{- else }}
    requests:
      cpu: "100m"
      memory: "256Mi"
    limits:
      cpu: "500m"
      memory: "512Mi"
    {{- end }}
  template:
    pod:
      imagePullSecrets:
        - name: {{ .Values.imagePullSecret }}
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
        runAsNonRoot: true
        seccompProfile:
          type: RuntimeDefault
  clusters:
    {{- toYaml .Values.clusters | nindent 4 }}
  mirrors:
    {{- toYaml .Values.mirrors | nindent 4 }}
```
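For Strimzi's own init container, the CR shown earlier in the thread sets `template.initContainer.securityContext`. A minimal sketch of the equivalent addition to this Helm template, to be merged under `spec.template` alongside the `pod` section:

```yaml
# Sketch: the same hardened securityContext applied to the init container
# that Strimzi itself creates (mirrors the fixed CR earlier in the thread).
template:
  initContainer:
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
          - ALL
      readOnlyRootFilesystem: true
      runAsUser: 1000
      runAsGroup: 1000
      runAsNonRoot: true
      seccompProfile:
        type: RuntimeDefault
```

Note this only affects the init container created by the Strimzi operator; the Istio-injected `istio-init` container is not controlled by this CR.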
We are getting the below policy failure for the init container.


Can you please assist with the same?