Description
Checklist:
- I've searched in the docs and FAQ for my answer: https://bit.ly/argocd-faq.
- I've included steps to reproduce the bug.
- I've pasted the output of argocd version.
Describe the bug
PersistentVolumes created by ArgoCD via ApplicationSet cannot bind to PersistentVolumeClaims, while the exact same PV YAML applied manually with kubectl apply binds immediately.
To Reproduce
- Create an ApplicationSet that deploys a PV via Helm template
- ArgoCD syncs and creates the PV (status: Available)
- Apply a PVC with volumeName pointing to the PV
- PVC remains Pending indefinitely: no events, no errors (inspection commands sketched below)
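As a rough sketch (not from the original report; the object names my-pv, my-pvc, and my-namespace are taken from the manifests further down), the binding state at this point can be inspected with:
# Check the PV phase and any claimRef the control plane has set
kubectl get pv my-pv -o jsonpath='{.status.phase}{"\n"}{.spec.claimRef}{"\n"}'
# Look for binding events or errors on the PVC
kubectl describe pvc my-pvc -n my-namespace
kubectl get events -n my-namespace --field-selector involvedObject.name=my-pvc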
Test: Manual apply works
# Export the PV created by ArgoCD
kubectl get pv my-pv -o yaml > pv.yaml
# Delete the PV
kubectl delete pv my-pv
# Apply the exact same YAML manually
kubectl apply -f pv.yaml
# Apply the PVC
kubectl apply -f pvc.yaml
# Result: PVC binds immediately
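To narrow down what differs between the ArgoCD-managed object and the manually applied one, the two server-side representations can be diffed, including managed fields and any ArgoCD tracking labels or annotations. This is a suggested sketch, not part of the original test:
# Capture the ArgoCD-created PV, including managed fields, before deleting it
kubectl get pv my-pv -o yaml --show-managed-fields > pv-argocd.yaml
# After recreating the PV manually (as above), capture it again
kubectl get pv my-pv -o yaml --show-managed-fields > pv-manual.yaml
# Differences in labels, annotations, finalizers, or claimRef show up here
diff pv-argocd.yaml pv-manual.yaml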
What I tested (all failed)
- selfHeal: false
- selfHeal: true with ignoreDifferences on /spec/claimRef
- ServerSideApply=true
- Replace=true
- RespectIgnoreDifferences=true
- Remove claimRef from the template entirely (one tested combination is sketched below)
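For reference, one of the tested combinations looked roughly like the following; this is a reconstruction from the list above, not a copy of the original manifests:
# Sketch of one tested combination (reconstructed, not the original manifest)
syncPolicy:
  automated:
    prune: true
    selfHeal: false            # also tried with selfHeal: true
  syncOptions:
    - ServerSideApply=true
    - RespectIgnoreDifferences=true
ignoreDifferences:
  - group: ""
    kind: PersistentVolume
    jsonPointers:
      - /spec/claimRef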
Expected behavior
PVC should bind to the PV automatically since:
- PV has no claimRef (or has a matching claimRef)
- PVC has volumeName pointing to the PV
- Access modes match
- Capacity is sufficient (field checks sketched below)
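A sketch of verifying those preconditions on the live objects (names again taken from the manifests below):
# Access modes, capacity, and claimRef on the PV side
kubectl get pv my-pv -o jsonpath='{.spec.accessModes} {.spec.capacity.storage} {.spec.claimRef}{"\n"}'
# Access modes, requested size, and volumeName on the PVC side
kubectl get pvc my-pvc -n my-namespace -o jsonpath='{.spec.accessModes} {.spec.resources.requests.storage} {.spec.volumeName}{"\n"}'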
Actual Behavior
PVC stays in Pending status. No events are generated on the PVC.
Version
argocd: v3.1.9+8665140
BuildDate: 2025-10-17T22:07:41Z
GitCommit: 8665140f96f6b238a20e578dba7f9aef91ddac51
GitTreeState: clean
GoVersion: go1.24.6
Compiler: gc
Platform: linux/amd64
argocd-server: v3.1.9+8665140
BuildDate: 2025-10-17T21:35:08Z
GitCommit: 8665140f96f6b238a20e578dba7f9aef91ddac51
GitTreeState: clean
GoVersion: go1.24.6
Compiler: gc
Platform: linux/amd64
Kustomize Version: v5.7.0 2025-06-28T07:00:07Z
Helm Version: v3.18.4+gd80839c
Kubectl Version: v0.33.1
Jsonnet Version: v0.21.0
Kubernetes version: v1.31.9
Configuration
ApplicationSet
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: storage-applications
  namespace: argocd
spec:
  generators:
    - git:
        files:
          - path: clusters/*/storage/*/values.yaml
  template:
    spec:
      ignoreDifferences:
        - group: ""
          kind: PersistentVolume
          jsonPointers:
            - /spec/claimRef
      syncPolicy:
        automated:
          prune: true
          selfHeal: true
        syncOptions:
          - ServerSideApply=true
PV Template:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: {{ .name }}
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: {{ .storage }}Gi
  nfs:
    path: {{ .path }}
    server: {{ .server }}
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem
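With hypothetical values (name: my-pv, storage: 20, path: /exports/my-pv, server: nfs.example.com; illustrative, not from the report), the template would render roughly as:
# Rendered example with hypothetical values
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 20Gi
  nfs:
    path: /exports/my-pv
    server: nfs.example.com
  persistentVolumeReclaimPolicy: Retain
  volumeMode: Filesystem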
PVC (applied by user):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: my-namespace
spec:
  volumeName: my-pv
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Questions
- Why does the same PV YAML bind when applied manually but not when created by ArgoCD?
- Is there a recommended way to manage static PersistentVolumes with ArgoCD that allows PVC binding?