What happened:
When the Pod has been running for around 2 hours, access to the blob container is lost, as shown below:

This is reproducible on a second attempt:

If we delete the Pod and re-create it, it can access the blob container again:

What you expected to happen:
The connection should persist.
How to reproduce it:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Namespace
metadata:
  name: ${namespace}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: ${svc}
  namespace: ${namespace}
EOF

volUniqId=${sa}#${container}#$(tr -dc a-zA-Z0-9 < /dev/urandom | head -c 4)

cat <<EOF | kubectl apply -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azureblob-fuse
provisioner: blob.csi.azure.com
parameters:
  skuName: Standard_LRS
reclaimPolicy: Delete
mountOptions:
  - '-o allow_other'
  - '--file-cache-timeout-in-seconds=120'
  - '--use-attr-cache=true'
  - '--cancel-list-on-mount-seconds=10'
  - '-o attr_timeout=120'
  - '-o entry_timeout=120'
  - '-o negative_timeout=120'
  - '--log-level=LOG_WARNING'
  - '--cache-size-mb=1000'
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: blob.csi.azure.com
  name: pv-blob-wi
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azureblob-fuse
  mountOptions:
    - -o allow_other
    - --file-cache-timeout-in-seconds=120
  csi:
    driver: blob.csi.azure.com
    volumeHandle: ${volUniqId}
    volumeAttributes:
      storageaccount: ${sa}
      containerName: ${container}
      clientID: ${appClientId}
      resourcegroup: ${rG2}
      tenantID: ${tenant2}
      subscriptionid: ${subscription2}
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: pvc-blob-wi
  namespace: ${namespace}
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: pv-blob-wi
  storageClassName: azureblob-fuse
EOF

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: blobfuse-mount-1
  namespace: ${namespace}
spec:
  serviceAccountName: ${svc}
  containers:
    - name: demo
      image: alpine
      command: ["/bin/sh"]
      args: ["-c", "while true; do cat /mnt/azure/text; sleep 5; done"]
      volumeMounts:
        - mountPath: /mnt/azure
          name: volume
          readOnly: false
  volumes:
    - name: volume
      persistentVolumeClaim:
        claimName: pvc-blob-wi
EOF

Anything else we need to know?:
I am using an App registration instead of a managed identity here for a one-to-multi-tenant scenario.
Given that I can access the data initially, the OIDC / federated identity credential part should be configured correctly.
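One way to check whether the ~2-hour symptom lines up with token expiry is to look at the `iat` (issued-at) and `exp` (expiry) claims of the token the workload identity webhook projects into the pod (the mount path varies by setup, so it is not hard-coded here). As a minimal, self-contained sketch, this computes a token lifetime from an already-decoded JWT payload; the payload values below are made up for illustration:

```shell
# Decoded JWT payload from the projected service-account token
# (values here are illustrative, not from a real token).
payload='{"iat":1700000000,"exp":1700007200}'

# Extract the issued-at and expiry timestamps.
iat=$(printf '%s' "$payload" | sed -n 's/.*"iat":\([0-9]*\).*/\1/p')
exp=$(printf '%s' "$payload" | sed -n 's/.*"exp":\([0-9]*\).*/\1/p')

# A 2-hour lifetime here would match the ~2-hour loss of access,
# suggesting the mount stops working when the token is not refreshed.
echo "token lifetime: $(( (exp - iat) / 3600 )) hours"
```

If the computed lifetime matches the time until access is lost, that points at the driver (or blobfuse) not renewing the credential rather than a misconfigured federation.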
The Blob CSI driver doesn't produce any useful logs:
kubectl logs csi-blob-node-gk65f -n kube-system -c blob
Environment:
- CSI Driver version: blob-csi:v1.25.5
- Kubernetes version (use kubectl version): 1.31.7
- OS (e.g. from /etc/os-release): AKSUbuntu-2204containerd-202504.16.0