maxParallel configuration is ignored for WAL archiving #45

@Remp69

Description

The maxParallel parameter configured in the Archive resource under spec.configuration.wal.maxParallel is documented as enabling parallel WAL archiving, but it appears to be ignored during the archiving process. WAL files are always archived sequentially, one at a time, regardless of the configured value.

Environment
Plugin version: v0.4.1
CloudNativePG version: 1.27.1
Kubernetes version: 1.31

Configuration
Here is the relevant section of my Archive manifest:

apiVersion: pgbackrest.cnpg.opera.com/v1
kind: Archive
metadata:
  name: ceph
spec:
  configuration:
    compression: zst
    repositories:
      - destinationPath: /mynamespace
        bucket: buc-qual
        endpointURL: *********
        disableVerifyTLS: true
        retention:
          full: 14
          fullType: time
        s3Credentials:
          region: *****
          accessKeyId:
            name: ***
            key: s3_key
          secretAccessKey:
            name: ***
            key: s3_secret_key
          uriStyle: path
    wal:
      maxParallel: 4  # This parameter is ignored for archiving
    restore:
      jobs: 4
    data:
      jobs: 4

Observed Behavior
Despite setting maxParallel: 4, the plugin logs show that WAL files are archived strictly sequentially, roughly 1-2 seconds apart, with no parallelism observed:

{"level":"info","ts":"2025-11-25T10:04:05.249692525Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CE","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CE"]}
{"level":"info","ts":"2025-11-25T10:04:05.440540675Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CE","startTime":"2025-11-25T10:04:05.249656019Z","endTime":"2025-11-25T10:04:05.440496448Z","elapsedWalTime":0.19084047}
{"level":"info","ts":"2025-11-25T10:04:06.71830319Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CF","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CF"]}
{"level":"info","ts":"2025-11-25T10:04:06.928038713Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000CF","startTime":"2025-11-25T10:04:06.718276241Z","endTime":"2025-11-25T10:04:06.927983255Z","elapsedWalTime":0.209707056}
{"level":"info","ts":"2025-11-25T10:04:08.373272968Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D0","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D0"]}
{"level":"info","ts":"2025-11-25T10:04:08.574280186Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D0","startTime":"2025-11-25T10:04:08.373232044Z","endTime":"2025-11-25T10:04:08.574259139Z","elapsedWalTime":0.201027096}
{"level":"info","ts":"2025-11-25T10:04:09.907317253Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D1","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D1"]}
{"level":"info","ts":"2025-11-25T10:04:10.111871882Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D1","startTime":"2025-11-25T10:04:09.90727897Z","endTime":"2025-11-25T10:04:10.111843191Z","elapsedWalTime":0.204564261}
{"level":"info","ts":"2025-11-25T10:04:11.392119263Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D2","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D2"]}
{"level":"info","ts":"2025-11-25T10:04:11.61193311Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D2","startTime":"2025-11-25T10:04:11.392083167Z","endTime":"2025-11-25T10:04:11.611887926Z","elapsedWalTime":0.219804759}
{"level":"info","ts":"2025-11-25T10:04:12.901120441Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D3","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D3"]}
{"level":"info","ts":"2025-11-25T10:04:13.100524207Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D3","startTime":"2025-11-25T10:04:12.901082476Z","endTime":"2025-11-25T10:04:13.100481973Z","elapsedWalTime":0.199399537}
{"level":"info","ts":"2025-11-25T10:04:14.412523516Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D4","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D4"]}
{"level":"info","ts":"2025-11-25T10:04:14.612216102Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D4","startTime":"2025-11-25T10:04:14.412489215Z","endTime":"2025-11-25T10:04:14.612176858Z","elapsedWalTime":0.199687686}
{"level":"info","ts":"2025-11-25T10:04:15.925461724Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D5","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D5"]}
{"level":"info","ts":"2025-11-25T10:04:16.127610208Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D5","startTime":"2025-11-25T10:04:15.925430717Z","endTime":"2025-11-25T10:04:16.127573871Z","elapsedWalTime":0.202143197}
{"level":"info","ts":"2025-11-25T10:04:17.422191396Z","msg":"Executing pgbackrest archive-push","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D6","options":["--repo1-type","s3","--repo1-s3-endpoint","********","--repo1-storage-verify-tls=n","--repo1-s3-bucket","buc-qual","--repo1-path","/mynamespace","--repo1-s3-uri-style","path","--stanza","c4","archive-push","/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D6"]}
{"level":"info","ts":"2025-11-25T10:04:17.626300993Z","msg":"Archived WAL file","logging_pod":"c4-1","walName":"/var/lib/postgresql/data/pgdata/pg_wal/000000070000000C000000D6","startTime":"2025-11-25T10:04:17.422150919Z","endTime":"2025-11-25T10:04:17.626265099Z","elapsedWalTime":0.20411422}

Expected Behavior
With maxParallel: 4, I would expect up to 4 WAL files to be archived in parallel, similar to how the restore functionality works.

Code
In internal/cnpgi/common/wal.go:

  • In the Archive function (line ~139), the number of WAL files gathered per call is hardcoded to 1, so only one WAL file is ever archived at a time; the maxParallel configuration is never read in this function.
  • In contrast, the Restore function (line ~280) correctly uses maxParallel.

Additional Notes
The API documentation in internal/pgbackrest/api/config.go clearly states:

// Number of WAL files to be either archived in parallel (when the
// PostgreSQL instance is archiving to a backup object store) or
// restored in parallel (when a PostgreSQL standby is fetching WAL
// files from a recovery object store). If not specified, WAL files
// will be processed one at a time.
MaxParallel int `json:"maxParallel,omitempty"`

Is there any plan to address this issue? We would greatly appreciate parallel WAL archiving support, as it would significantly improve our archiving performance during high-write workloads.
