
[Bug]: Pause GC failed: backup will continue. #47771

@rajasami156

Description


Is there an existing issue for this?

  • I have searched the existing issues

Environment

- Milvus version: v2.6.11
- Deployment mode (standalone or cluster): standalone
- MQ type (rocksmq, pulsar or kafka): woodpecker
- SDK version (e.g. pymilvus v2.0.0rc2): 2.6.9
- OS (Ubuntu or CentOS): WSL
- CPU/Memory: 16 GB
- GPU: None
- Others:

Current Behavior

I'm about two weeks into Milvus and have been setting everything up.

[2026/02/13 07:50:03.219 +00:00] [WARN] [backup/gc_controller.go:193] ["Pause GC failed: backup will continue. This is not a fatal error — if pausing GC fails the backup process will still proceed because Milvus GC runs infrequently and is unlikely to affect the data being backed up.Pausing GC is only recommended for very large backups (takes > 1 hour)."] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab] [error="client: pause gc: Get "milvus-standalone/management/datacoord/garbage_collection/pause?collection_id=464247859479840310&pause_seconds=3600": unsupported protocol scheme """] [collection_id=464247859479840310]

I'm getting this error and I'm not sure why.
This is the backup.yaml I'm using, taken from the defaults given in the docs:

'''


# Configures the system log output.
log:
  level: info # Only supports debug, info, warn, error, panic, or fatal. Default 'info'.
  console: true # whether print log to console
  file:
    filename: "logs/backup.log"

# Zilliz Cloud config.
# If you want to migrate data to Zilliz Cloud, you need to configure the following parameters.
# Otherwise, you can ignore it.
cloud:
  address: https://api.cloud.zilliz.com
  apikey: <your-api-key>

# milvus proxy address, compatible to milvus.yaml
milvus:
  address: milvus-standalone
  port: 19530
  user: "root"
  password: "sami"
  rpcChannelName: ""       # ← blank this out for woodpecker mode


  # tls mode values [0, 1, 2]
  # 0 is close, 1 is one-way authentication, 2 is mutual authentication
  tlsMode: 0
  # tls cert path for validate server, will be used when tlsMode is 1 or 2
  caCertPath: ""
  serverName: ""
  # mutual tls cert path, for server to validate client.
  # Will be used when tlsMode is 2
  # for backward compatibility, if not set, will use tlsmode 1.
  # WARN: in future version, if user set tlsmode 2, but not set mtlsCertPath, will cause error.
  mtlsCertPath: ""
  mtlsKeyPath: ""


  etcd:
    endpoints: milvus-etcd:2379  # you can set multiple endpoints, separated by comma, for example: "127.0.0.1:2379,127.0.0.1:2380,127.0.0.1:2381"
    rootPath: "by-dev"

# Related configuration of minio, which is responsible for data persistence for Milvus.
minio:
  # Milvus storage configs, make them the same with milvus config
  storageType: "minio" # support storage type: local, minio, s3, aws, gcp, ali(aliyun), azure, tc(tencent), gcpnative
  # You can use "gcpnative" for the Google Cloud Platform provider. Uses service account credentials for authentication.
  address: milvus-minio # Address of MinIO/S3
  port: 9000   # Port of MinIO/S3
  region: ""      # region of MinIO/S3
  accessKeyID: samiullah  # accessKeyID of MinIO/S3
  secretAccessKey: melior # MinIO/S3 encryption string
  token: ""     # token of MinIO/S3
  gcpCredentialJSON: "/path/to/json-key-file" # The JSON content contains the gcs service account credentials.
  # Used only for the "gcpnative" cloud provider.
  useSSL: false # Access to MinIO/S3 with SSL
  useIAM: false
  iamEndpoint: ""
  bucketName: "a-bucket" # Milvus Bucket name in MinIO/S3, make it the same as your milvus instance
  rootPath: "files" # Milvus storage root path in MinIO/S3, make it the same as your milvus instance

  # Backup storage configs, the storage you want to put the backup data
  backupStorageType: "minio" # support storage type: local, minio, s3, aws, gcp, ali(aliyun), azure, tc(tencent)
  backupAddress: milvus-minio # Address of MinIO/S3
  backupRegion: ""   # region of MinIO/S3
  backupPort: 9000   # Port of MinIO/S3
  backupAccessKeyID: samiullah  # accessKeyID of MinIO/S3
  backupSecretAccessKey: melior # MinIO/S3 encryption string
  backupToken: ""       # token of MinIO/S3
  backupGcpCredentialJSON: "/path/to/json-key-file" # The JSON content contains the gcs service account credentials.
  # Used only for the "gcpnative" cloud provider.
  backupBucketName: "a-bucket-backup" # Bucket name to store backup data. Backup data will store to backupBucketName/backupRootPath
  backupRootPath: "backup" # Rootpath to store backup data. Backup data will store to backupBucketName/backupRootPath
  backupUseSSL: false # Access to MinIO/S3 with SSL

  # If you need to back up or restore data between two different storage systems, direct client-side copying is not supported.
  # Set this option to true to enable data transfer through Milvus Backup.
  # Note: This option will be automatically set to true if `minio.storageType` and `minio.backupStorageType` differ.
  # However, if they are the same but belong to different services, you must manually set this option to `true`.
  crossStorage: false

  # File size threshold (MiB) above which multipart copy is used for S3-compatible storage.
  # Default is 500. GCP does not support multipart copy and will always use single copy.
  multipartCopyThresholdMiB: 500
  
backup:
  parallelism:
    # number of threads to copy data. reduce it if blocks your storage's network bandwidth
    copydata: 128

    # number of collections to backup in parallel.
    backupCollection: 4
    # number of segments to backup in parallel
    backupSegment: 1024

    # Collection level parallelism to restore
    restoreCollection: 2
    # max number of import job to run in parallel,
    # should be less than milvus's config dataCoord.import.maxImportJobNum
    importJob: 768
  
  # keep temporary files during restore, only use to debug 
  keepTempFiles: false
  
  # Pause GC during backup through Milvus Http API. 
  gcPause:
    enable: true
    seconds: 7200  
    address: "http://milvus-standalone:9091"

'''
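For what it's worth, the URL in the error message has no `http://` prefix, which is exactly what Go's HTTP client means by `unsupported protocol scheme ""`. A quick stdlib-only sketch comparing the failing URL from the log with what the same request would look like if the `gcPause.address` above were actually applied (the prefixed URL is my assumption of the intended form, not something from the log):

```python
from urllib.parse import urlparse

# URL copied verbatim from the failing request in the warning -- note: no scheme:
failing = ("milvus-standalone/management/datacoord/garbage_collection/pause"
           "?collection_id=464247859479840310&pause_seconds=3600")

# What the request would look like with gcPause.address prepended (assumed form):
fixed = "http://milvus-standalone:9091/" + failing.split("/", 1)[1]

print(repr(urlparse(failing).scheme))  # '' -> Go reports unsupported protocol scheme ""
print(repr(urlparse(fixed).scheme))    # 'http'
```

So it looks like the backup tool built the request from the bare hostname rather than from `gcPause.address`.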

MILVUS SERVICES

This is the relevant portion of my docker-compose.yml (section headers are comments in the file):

'''
milvus-etcd:
  container_name: milvus-etcd
  image: quay.io/coreos/etcd:v3.5.25
  environment:
    - ETCD_AUTO_COMPACTION_MODE=revision
    - ETCD_AUTO_COMPACTION_RETENTION=1000
    - ETCD_QUOTA_BACKEND_BYTES=4294967296
    - ETCD_SNAPSHOT_COUNT=50000
  volumes:
    - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/etcd:/etcd
  command: etcd -advertise-client-urls=http://etcd:2379 -listen-client-urls http://0.0.0.0:2379 --data-dir /etcd
  healthcheck:
    test: ["CMD", "etcdctl", "endpoint", "health"]
    interval: 30s
    timeout: 20s
    retries: 3
  restart: always
  networks:
    - milvus

milvus-minio:
  container_name: milvus-minio
  image: minio/minio:RELEASE.2024-12-18T13-15-44Z
  environment:
    MINIO_ROOT_USER: samiullah
    MINIO_ROOT_PASSWORD: melior
  ports:
    - "9001:9001"
    - "9000:9000"
  volumes:
    - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/minio:/minio_data
  command: minio server /minio_data --console-address ":9001"
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9000/minio/health/live"]
    interval: 30s
    timeout: 20s
    retries: 3
  restart: always
  networks:
    - milvus

milvus-standalone:
  container_name: milvus-standalone
  image: milvusdb/milvus:v2.6.11
  command: ["milvus", "run", "standalone"]
  security_opt:
    - seccomp:unconfined
  environment:
    ETCD_ENDPOINTS: milvus-etcd:2379
    MINIO_ADDRESS: milvus-minio:9000
    MINIO_ACCESS_KEY_ID: samiullah
    MINIO_SECRET_ACCESS_KEY: melior
    MQ_TYPE: woodpecker
    COMMON_SECURITY_AUTHORIZATIONENABLED: "true"
  volumes:
    - ${DOCKER_VOLUME_DIRECTORY:-.}/volumes/milvus:/var/lib/milvus
  healthcheck:
    test: ["CMD", "curl", "-f", "http://localhost:9091/healthz"]
    interval: 30s
    start_period: 90s
    timeout: 20s
    retries: 3
  ports:
    - "19530:19530"
    - "9091:9091"
  depends_on:
    milvus-etcd:
      condition: service_healthy
    milvus-minio:
      condition: service_healthy
  restart: always
  networks:
    - milvus

# ---------------------------------------------------------------------------
# Attu — Milvus Web UI
# Access at http://localhost:8000
# ---------------------------------------------------------------------------
attu:
  container_name: sami-milvus-attu
  image: zilliz/attu:v2.6.4
  environment:
    MILVUS_URL: milvus-standalone:19530
  ports:
    - "8000:3000"
  depends_on:
    milvus-standalone:
      condition: service_healthy
  restart: always
  networks:
    - milvus

# ---------------------------------------------------------------------------
# Milvus Backup Server
# Access at http://localhost:8090
# Requires: Dockerfile + backup.yaml in ./milvus-backup/ subdirectory
# (or adjust build.context below to wherever you keep those files)
# ---------------------------------------------------------------------------
milvus-backup:
  container_name: milvus-backup-samiullah
  build:
    context: ./milvus_backup_sami # directory containing Dockerfile + backup.yaml
    dockerfile: Dockerfile
  ports:
    - "8090:8090"
  depends_on:
    milvus-standalone:
      condition: service_healthy
  restart: unless-stopped
  networks:
    - milvus

# =============================================================================
# VOLUMES
# =============================================================================
volumes:
  postgres-db-volume:
  # Milvus data volumes are bind-mounted via DOCKER_VOLUME_DIRECTORY,
  # so no named volumes are needed for them.

# =============================================================================
# NETWORKS
# =============================================================================
networks:
  # Internal network for Airflow's own backing services (Postgres, Redis).
  # Not exposed to Milvus services intentionally.
  airflow-backend:
    driver: bridge
  # Shared network used by ALL Milvus services (etcd, minio, standalone,
  # attu, backup) AND by Airflow workers so DAGs can reach Milvus directly.
  milvus:
    driver: bridge
    name: milvus
'''

Expected Behavior

No response

Steps To Reproduce

Milvus Log

[2026/02/13 07:50:03.219 +00:00] [WARN] [backup/gc_controller.go:193] ["Pause GC failed: backup will continue. This is not a fatal error — if pausing GC fails the backup process will still proceed because Milvus GC runs infrequently and is unlikely to affect the data being backed up.Pausing GC is only recommended for very large backups (takes > 1 hour)."] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab] [error="client: pause gc: Get "milvus-standalone/management/datacoord/garbage_collection/pause?collection_id=464247859479840310&pause_seconds=3600": unsupported protocol scheme """] [collection_id=464247859479840310]
[error="backup: call replicate message failed: client: replicate message: client: operation failed: error_code:UnexpectedError reason:"service unavailable: not supported in streaming mode" code:2 retriable:true detail:"service unavailable: not supported in streaming mode""]
[2026/02/13 07:50:04.249 +00:00] [INFO] [backup/task.go:433] ["skip backup index extra info"] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab]
[2026/02/13 07:50:04.249 +00:00] [INFO] [backup/task.go:452] ["start write meta"] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab]
[2026/02/13 07:50:04.530 +00:00] [INFO] [backup/task.go:478] ["finish write meta"] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab]
[2026/02/13 07:50:04.530 +00:00] [INFO] [backup/task.go:215] ["backup successfully"] [task_id=1343c8b7-0289-4c6a-b636-79ae490572ab]
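In case it helps triage: the pause request can be rebuilt by hand to confirm what a correctly schemed URL should look like. A stdlib-only sketch; `build_pause_url` is a hypothetical helper I wrote that mirrors the path and query parameters visible in the log, and the trailing `urlopen` call (commented out) would only work against a live deployment:

```python
from urllib.parse import urlencode

def build_pause_url(base: str, collection_id: int, seconds: int) -> str:
    """Assemble the datacoord GC-pause management URL seen in the warning log."""
    query = urlencode({"collection_id": collection_id, "pause_seconds": seconds})
    return f"{base.rstrip('/')}/management/datacoord/garbage_collection/pause?{query}"

url = build_pause_url("http://milvus-standalone:9091", 464247859479840310, 7200)
print(url)
# import urllib.request; urllib.request.urlopen(url)  # only against a live deployment
```

With the scheme and management port present, the same request the backup tool failed to make should go through.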

Anything else?

No response

Metadata

Labels

kind/bug — Issues or changes related to a bug
triage/needs-information — Indicates an issue needs more information in order to work on it.
