This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
The Scality CSI Driver for S3 is a Kubernetes CSI driver that enables mounting Scality RING S3 buckets as persistent volumes. It uses mount-s3 (a FUSE-based filesystem) to provide POSIX-like access to S3 objects. The driver supports both static provisioning (pre-existing buckets) and dynamic provisioning (automatic bucket creation).
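Static provisioning means pointing a PersistentVolume at a pre-existing bucket. As a hedged sketch only (the driver name `s3.csi.scality.com`, the bucket name, and the attribute keys below are illustrative assumptions, not confirmed by this repository), a static PV might look like:

```shell
# Hypothetical static-provisioning example; verify driver name and
# volumeAttributes keys against the repository's documentation.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: s3-pv
spec:
  capacity:
    storage: 1200Gi            # required by Kubernetes; not enforced for S3
  accessModes:
    - ReadWriteMany
  mountOptions:
    - allow-delete             # a mount-s3 option; passed through to mount-s3
  csi:
    driver: s3.csi.scality.com # assumed driver name
    volumeHandle: s3-pv        # must be unique across PVs
    volumeAttributes:
      bucketName: my-existing-bucket
EOF
```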
- CSI Driver Node Service (`cmd/scality-csi-driver`): Main CSI driver implementing NodePublishVolume/NodeUnpublishVolume RPCs. Runs as a DaemonSet on each node and handles volume mount/unmount operations.
- CSI Controller Service (`cmd/scality-csi-controller`): Implements the CSI Controller Service for dynamic provisioning. Handles CreateVolume/DeleteVolume RPCs and manages the S3 bucket lifecycle. Includes a separate controller process that reconciles MountpointS3PodAttachment CRDs.
- CSI Mounter (`cmd/scality-csi-mounter`): Helper binary that runs mount-s3 processes inside dedicated "mounter pods" for improved isolation and resource management.
- Install MP (`cmd/install-mp`): Installation helper for the mount-s3 binary.
- `pkg/driver`: Core CSI driver implementation (controller, node, identity services)
- `pkg/driver/node/mounter`: Mounting logic with two strategies:
    - systemd mounter: Runs mount-s3 via systemd transient services (legacy)
    - pod mounter: Runs mount-s3 in dedicated Kubernetes pods (default)
- `pkg/driver/node/credentialprovider`: Credential resolution from secrets, driver defaults, or AWS profiles
- `pkg/driver/controller/credentialprovider`: Controller-side credential provider for dynamic provisioning
- `pkg/podmounter/mppod`: Mounter pod creation, management, and resource calculations
- `pkg/mountpoint`: mount-s3 argument construction and process execution
- `pkg/api/v2`: CRD definitions for MountpointS3PodAttachment (tracks volume attachments)
- `pkg/s3client`: S3 client wrapper for bucket operations
- `pkg/system`: Low-level system interactions (systemd, pts, namespaces)
- Dual Mounter Strategy: The driver supports two mounting approaches. Pod mounter is enabled by default and recommended. Systemd mounter is legacy but still supported.
- Credential Resolution Chain: Credentials are resolved in order: secret-based → driver-level → AWS profile → IAM roles.
- Volume Sharing: Multiple pods can share the same S3 volume. MountpointS3PodAttachment CRD tracks these shared mounts.
- Resource Management: Mounter pods have resource requests/limits calculated based on cache size and mount options.
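For the secret-based step of the credential chain, a minimal sketch of creating a credentials secret (the secret name, namespace, and key names here are assumptions; check the credential provider's documentation for the exact keys it reads):

```shell
# Hypothetical secret for secret-based credentials; key names are
# illustrative assumptions, not confirmed by this repository.
kubectl create secret generic s3-secret \
  --namespace kube-system \
  --from-literal=access_key_id=AKIAEXAMPLE \
  --from-literal=secret_access_key=EXAMPLESECRET
```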
```bash
# Build all binaries (cross-compiles to Linux)
make bin

# Build container image (default tag: local)
make container

# Build with custom tag
make container CONTAINER_TAG=v2.0.0
```

```bash
# Run unit tests
make unit-test

# Run unit tests with race detection and coverage
make test

# Generate coverage report
make cover

# Run CSI compliance tests (sanity tests)
make csi-compliance-test

# Run controller integration tests (uses envtest)
make controller-integration-test
```

```bash
# Format code
make fmt

# Run linters
make lint

# Run all pre-commit hooks
make precommit
```

```bash
# Build and serve documentation (MkDocs)
make docs

# Clean documentation artifacts
make docs-clean
```

```bash
# Generate CRD manifests and deepcopy functions
make generate
```

This regenerates:

- CRD YAML in `charts/scality-mountpoint-s3-csi-driver/crds/`
- `zz_generated.deepcopy.go` for API types

```bash
# Install CRDs directly from repository using kustomize
kubectl apply -k github.com/scality/mountpoint-s3-csi-driver

# Or install from local directory
kubectl apply -k .
```

```bash
# Check dependency licenses
make check-licenses

# Generate license files for dependencies
make generate-licenses
```

E2E tests require a Scality RING S3 endpoint and credentials. The E2E workflow is orchestrated via Mage targets (Makefile targets delegate to Mage internally).
```bash
# Install mage (required)
go install github.com/magefile/mage@latest

# Full workflow: load credentials, install driver, run tests
S3_ENDPOINT_URL=https://s3.example.com mage e2e:all

# Or use separate commands:
S3_ENDPOINT_URL=https://s3.example.com mage e2e:install
S3_ENDPOINT_URL=https://s3.example.com mage e2e:test
mage e2e:uninstall

# Run only Go-based e2e tests (skip verification)
S3_ENDPOINT_URL=https://s3.example.com mage e2e:goTest

# Run only verification (driver health check)
mage e2e:verify

# Uninstall options
mage e2e:uninstall      # Helm uninstall + delete secret
mage e2e:uninstallClean # Also delete custom namespace
mage e2e:uninstallForce # Force uninstall + delete CSI driver registration

# With custom image (CI usage)
S3_ENDPOINT_URL=https://s3.example.com CSI_IMAGE_TAG=v2.0.0 CSI_IMAGE_REPOSITORY=ghcr.io/scality/mountpoint-s3-csi-driver mage e2e:install

# Makefile targets still work (they delegate to Mage):
# make e2e-all S3_ENDPOINT_URL=https://s3.example.com
# make csi-install S3_ENDPOINT_URL=https://s3.example.com
```

```bash
# Run full E2E suite on OpenShift
mage e2e:openShiftAll

# Create image pull secret for GHCR
mage e2e:createPullSecret

# Apply SCCs manually (used by CI workflow)
oc apply -f .github/openshift/scc.yaml

# Configure DNS (dispatches to OpenShift path when CLUSTER_TYPE=openshift)
CLUSTER_TYPE=openshift mage e2e:configureCIDNS
```

Mage provides higher-level tasks for local development with minikube/kind:
```bash
# Build and install CSI driver from local source
mage up

# Remove CSI driver and resources
mage down

# Install specific version from OCI registry
SCALITY_CSI_VERSION=v2.0.0 mage install

# Configure/remove DNS mapping for s3.example.com
mage configureS3DNS
mage removeS3DNS
mage showS3DNSStatus
```

- Unit tests should use fakes/mocks defined in `*test.go` files or dedicated `*/mocks/` directories
- Controller tests use `envtest` (real Kubernetes API server) for integration testing
- Mock generation uses `github.com/golang/mock`
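Regenerating a mock with `mockgen` might look like the following sketch (the `-source` and `-destination` paths are illustrative assumptions; check existing `mocks/` directories for the conventions actually used):

```shell
# Install mockgen from github.com/golang/mock
go install github.com/golang/mock/mockgen@latest

# Regenerate a mock from an interface source file (paths are hypothetical)
mockgen -source=pkg/driver/node/mounter/mounter.go \
  -destination=pkg/driver/node/mounter/mocks/mock_mounter.go \
  -package=mocks
```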
- Located in `tests/e2e/`
- Use Ginkgo/Gomega testing framework
- Require real S3 infrastructure
- Test both static and dynamic provisioning scenarios
- Separate `go.mod` to isolate e2e dependencies
```bash
# Run specific test by name
go test -v ./pkg/driver/node/mounter -run TestPodMounter

# Run tests in specific package
go test -v ./pkg/driver/...
```

Version is set in the Makefile (`VERSION=2.0.0`) and injected at build time via ldflags into `pkg/driver/version`.
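The ldflags injection works roughly like the sketch below; the exact module path and variable name are assumptions, so treat the Makefile as authoritative:

```shell
# Hypothetical equivalent of the Makefile's version injection: -X sets a
# string variable in the version package at link time.
go build \
  -ldflags "-X github.com/scality/mountpoint-s3-csi-driver/pkg/driver/version.Version=2.0.0" \
  ./cmd/scality-csi-driver
```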
Follow conventional commit style based on repository history:
- Use prefix `S3CSI-XXX:` for Jira ticket references
- Format: `S3CSI-XXX: Brief description of change`
- Focus on what changed and why
- Main branch: `main`
- Feature branches: `feature/S3CSI-XXX-description`
- Improvement branches: `improvement/S3CSI-XXX-description`
- Platform-specific files use build tags: `//go:build linux` or `//go:build darwin`
- Always provide Darwin stubs for Linux-only functionality to support local development on macOS
- CI runs with `GOOS=linux` to ensure Linux-specific code is analyzed
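Since Darwin stubs should always exist, both platform variants ought to compile; a quick local check (assuming the standard Go toolchain) is:

```shell
# Verify both build-tag variants compile before pushing
GOOS=linux  go build ./...
GOOS=darwin go build ./...
```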
- Chart location: `charts/scality-mountpoint-s3-csi-driver/`
- Values can be customized for image repository, tag, resources, etc.
- CRDs are included in the `crds/` subdirectory
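Installing the chart from the local checkout might look like this sketch (the release name and the `image.repository`/`image.tag` value keys are assumptions; check the chart's `values.yaml` for the real keys):

```shell
# Hypothetical local chart install; value keys are illustrative.
helm install scality-s3-csi ./charts/scality-mountpoint-s3-csi-driver \
  --namespace kube-system \
  --set image.repository=ghcr.io/scality/mountpoint-s3-csi-driver \
  --set image.tag=v2.0.0
```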
- `CSI_NODE_NAME`: Node name for CSI driver
- `MOUNTPOINT_VERSION`: Mount-s3 version to report
- `MOUNTPOINT_NAMESPACE`: Namespace for mounter pods
- `S3_ENDPOINT_URL`: S3 endpoint URL (required for e2e tests)
- `ACCOUNT1_ACCESS_KEY`/`ACCOUNT1_SECRET_KEY`: S3 credentials (loaded automatically by `mage e2e:all` from `integration_config.json`)
- Check pod logs: `kubectl logs -n kube-system <pod-name>`
- Check systemd services (legacy mounter): `systemctl status mount-s3-*`
- Check mounter pods: `kubectl get pods -n kube-system -l app=mountpoint-s3-csi-mounter`
- Check CRDs: `kubectl get mountpoints3podattachments` or `kubectl get s3pa`
- Enable debug logging via mount option: `--log-level debug`
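Putting the last two checks together, a debugging loop might look like this sketch (the PV field layout follows the standard CSI PersistentVolume schema; the label selector comes from the check above):

```shell
# After adding the debug mount option to the PV spec, e.g.
#   mountOptions:
#     - --log-level debug
# follow the mounter pod logs for the affected volume:
kubectl logs -n kube-system -l app=mountpoint-s3-csi-mounter --tail=100 -f
```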
- `Makefile`: Primary build and test commands
- `magefiles/`: Mage-based development workflow
- `.pre-commit-config.yaml`: Pre-commit hooks configuration
- `.golangci.yaml`: Linter configuration
- `mkdocs.yml`: Documentation site configuration
- `integration_config.json`: E2E test credentials (not in repo, user-provided)