GCS namespace bucket downloads fail #1831

@galrami

Description


Environment info

  • NooBaa Version: noobaa/noobaa-core:master-20250911
  • Platform: AKS (Azure Kubernetes Service) V1.33.3

Actual behavior

  1. Uploads to a GCS-backed namespace bucket work — objects are written to GCS successfully
  2. Downloads from the same bucket fail — the request hangs and then fails with ReadTimeoutError and ECONNRESET
  3. This happens even with a 47-byte file, so it is not a size or memory issue
  4. To rule out network or credential problems, we ran a Node.js script directly inside the noobaa-endpoint pod using NooBaa's own bundled GCS client (google_storage_wrap). The script called file.getMetadata() followed by file.createReadStream() on a test file in the GCS bucket backing the namespace store. The download succeeded.

Expected behavior

  1. Downloads from a GCS-backed namespace bucket should work the same as uploads

Steps to reproduce

  1. Create a NamespaceStore of type google-cloud-storage pointing to a GCS bucket using a service account JSON key
  2. Create a BucketClass and ObjectBucketClaim using that namespace store
  3. Upload a file via the NooBaa S3 endpoint:
    aws s3 cp test.txt s3://$BUCKET_NAME/test.txt --endpoint-url https://$BUCKET_HOST:$BUCKET_PORT --no-verify-ssl
    — succeeds
  4. Download the file:
    aws s3 cp s3://$BUCKET_NAME/test.txt /tmp/download.txt --endpoint-url https://$BUCKET_HOST:$BUCKET_PORT --no-verify-ssl
    — hangs then fails

More information - Screenshots / Logs / Other output

NooBaa endpoint logs:
NamespaceGCP.read_object_md: { bucket: 'my-gcs-bucket-...', key: 'test-upload.txt' }
NamespaceGCP.read_object_stream: { object_md: { size: '47', ... } }
...
request aborted /my-gcs-bucket-.../test-upload.txt
request error: Error: aborted
code: 'ECONNRESET'

Potential cause found in source code:

In src/sdk/namespace_gcp.js, the read_object_stream method:

  const count_stream = stream_utils.get_tap_stream(data => {
      this.stats_collector.update_namespace_write_stats({...});
      count = 0;
  });

We noticed three potential issues:

  1. this.stats_collector does not exist — the constructor sets this.stats, not this.stats_collector. This would cause a TypeError when the first data chunk flows through the transform stream.
  2. It calls update_namespace_write_stats instead of update_namespace_read_stats, even though this is a read operation.
  3. There is no optional chaining — namespace_s3.js correctly uses this.stats?.update_namespace_read_stats(...); the GCP version has no such guard.

Question: Are we possibly misconfiguring something on our end, or is this a known limitation? We couldn't find other reports of using NooBaa with GCS namespace stores — are there other users successfully running this combination?
