
SDK | Upgrade AWS SDK to v3 - Namespace_S3 #9163


Open
wants to merge 1 commit into base: master

Conversation

@naveenpaul1 (Contributor) commented Jul 29, 2025

Describe the Problem

Explain the Changes

  1. Upgrade AWS SDK to v3 - Namespace_S3
  2. Remove AWS SDK V2 cloud util changes

Issues: Fixed #xxx / Gap #xxx

Testing Instructions:

  1. Run namespaces3 flows
  • Doc added/updated
  • Tests added

Reference: https://mattsumme.rs/2025/aws-sdk-v3-put-object-stream-regression/
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/migrating/notable-changes/

Summary by CodeRabbit

  • Refactor
    • Migrated S3 integration to AWS SDK v3-style clients, improving streaming, multipart uploads, and request handling; introduced explicit top-level bucket handling and standardized Last-Modified usage and HTTP status validation.
  • Chores
    • Removed legacy STS v2 credential/client paths; consolidated on STS v3 credential flow.
  • Tests
    • Updated test suite to exercise SDK v3 flows and removed obsolete v2-specific tests.


coderabbitai bot commented Jul 29, 2025

Walkthrough

Migrates S3 usage from AWS SDK v2 to v3 across NamespaceS3 and ObjectSDK, changes S3 response handling to use top-level fields, moves the bucket into the NamespaceS3 constructor, removes the legacy AWS STS v2 helpers from cloud_utils along with their tests, and updates the requestHandler/credentials shape for v3 clients.
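
For orientation, a minimal sketch of the call shape this migration moves to: the aggregated v3 S3 client returns promises directly, response fields are flattened onto the output, and the HTTP status lives under $metadata (names here are illustrative, not the repo's exact code):

    const { S3 } = require('@aws-sdk/client-s3');

    // v2 for comparison: await s3.headObject({ Bucket, Key }).promise(), with some
    // metadata reachable only via the nested $response.
    async function head_object_v3(s3_params, bucket, key) {
        const s3 = new S3(s3_params); // same call style as NamespaceS3's this.s3
        const res = await s3.headObject({ Bucket: bucket, Key: key });
        if (Math.floor(res.$metadata.httpStatusCode / 100) !== 2) {
            throw new Error(`unexpected status ${res.$metadata.httpStatusCode}`);
        }
        return {
            etag: res.ETag,
            last_modified: res.LastModified, // a Date, read from the top-level field
            size: res.ContentLength,
        };
    }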

Changes

Cohort / File(s) Summary of changes
S3 response handling
src/endpoint/s3/s3_utils.js
Read LastModified from top-level response (r.LastModified); validate status via res.$metadata.httpStatusCode; return the top-level response object instead of nested $response.
NamespaceS3 (v3 migration)
src/sdk/namespace_s3.js
Migrate NamespaceS3 to AWS SDK v3: add top-level bucket ctor arg, accept S3ClientConfig-style s3_params, switch to async/await v3 calls for get/list/put/copy/multipart/tagging/ACL, adjust ETag/LastModified extraction and JSDoc types.
ObjectSDK S3 init (v3)
src/sdk/object_sdk.js
Build v3-style s3_params (nested credentials, region, forcePathStyle, requestHandler via noobaa_s3_client, requestChecksumCalculation); remove v2 httpOptions/agent/signatureVersion usage; pass bucket as top-level arg to NamespaceS3.
Cloud utils STS cleanup
src/util/cloud_utils.js
Remove legacy AWS STS v2 flow and exported helpers (createSTSS3Client, generate_aws_sts_creds); rely on STS v3 path (generate_aws_sdkv3_sts_creds, createSTSS3SDKv3Client) that returns plain credentials.
Tests: STS v2 removal
src/test/unit_tests/util_functions_tests/test_cloud_utils.js
Remove AWS SDK v2 STS tests/stubs and v2-only constants; retain tests validating SDK v3 STS flows.
Tests: minor whitespace
src/test/integration_tests/api/s3/test_s3_ops.js
Removed a blank line in mocha.before; no behavioral change.

Sequence Diagram(s)

sequenceDiagram
  participant App as ObjectSDK
  participant NS as NamespaceS3
  participant S3 as S3Client (v3)
  participant RH as RequestHandler

  Note over App: initialize
  App->>RH: get_requestHandler_with_suitable_agent(endpoint)
  App->>NS: new NamespaceS3({ namespace_resource_id, s3_params, bucket, stats })
  NS->>S3: Initialize S3 client with s3_params + requestHandler

  Note over App,NS: read object stream flow
  App->>NS: read_object_stream(key, range?)
  NS->>S3: GetObject (v3) / HeadObject (v3)
  S3-->>NS: top-level metadata (LastModified, ETag), Body (stream)
  NS->>App: Return Readable stream, update stats
sequenceDiagram
  participant Caller as Cloud Utils Consumer
  participant STS as STSClient (v3)
  participant CloudUtils as cloud_utils

  Caller->>CloudUtils: generate_aws_sdkv3_sts_creds(params)
  CloudUtils->>STS: AssumeRoleWithWebIdentity
  STS-->>CloudUtils: { accessKeyId, secretAccessKey, sessionToken }
  CloudUtils->>Caller: plain credentials object
  Caller->>Caller: create S3 client with credentials (via noobaa requestHandler)
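
A rough sketch of that STS v3 credential flow; only the @aws-sdk/client-sts and @aws-sdk/client-s3 calls are the SDKs' own, while the function and parameter names are illustrative rather than the repo's exact helpers:

    const { STSClient, AssumeRoleWithWebIdentityCommand } = require('@aws-sdk/client-sts');
    const { S3 } = require('@aws-sdk/client-s3');

    // Roughly what generate_aws_sdkv3_sts_creds plus its caller amount to.
    async function assume_role_and_build_s3({ role_arn, web_identity_token, region, requestHandler }) {
        const sts = new STSClient({ region });
        const res = await sts.send(new AssumeRoleWithWebIdentityCommand({
            RoleArn: role_arn,
            RoleSessionName: 'noobaa-namespace-s3',   // placeholder session name
            WebIdentityToken: web_identity_token,
        }));
        // Plain credentials object, as described above.
        const credentials = {
            accessKeyId: res.Credentials.AccessKeyId,
            secretAccessKey: res.Credentials.SecretAccessKey,
            sessionToken: res.Credentials.SessionToken,
        };
        // The caller then builds the S3 client with those credentials and the shared request handler.
        return new S3({ region, credentials, requestHandler });
    }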

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested labels

size/XXL

Suggested reviewers

  • dannyzaken
  • jackyalbo
  • liranmauda
  • tangledbytes

naveenpaul1 marked this pull request as ready for review August 14, 2025 10:08
@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 7

🔭 Outside diff range comments (1)
src/sdk/namespace_s3.js (1)

126-132: Correct pagination field: use NextUploadIdMarker.

The response’s next marker is NextUploadIdMarker. Returning UploadIdMarker will regress pagination.

Apply this diff:

         return {
             objects: _.map(res.Uploads, obj => this._get_s3_object_info(obj, params.bucket)),
             common_prefixes: _.map(res.CommonPrefixes, 'Prefix'),
             is_truncated: res.IsTruncated,
             next_marker: res.NextKeyMarker,
-            next_upload_id_marker: res.UploadIdMarker,
+            next_upload_id_marker: res.NextUploadIdMarker,
         };
🧹 Nitpick comments (2)
src/endpoint/s3/s3_utils.js (1)

671-673: Align error message with v3 shape and consider normalizing the return type.

  • The message references "response header" while v3 exposes LastModified as a top-level field. Nit but confusing.
  • Optional: consider returning a numeric timestamp for consistency with other paths that use getTime().

Apply this diff for the message:

-    if (!r.LastModified) throw new Error("date not found in response header");
+    if (!r.LastModified) throw new Error("LastModified not found on response");
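
If the numeric normalization is wanted, a small sketch of the optional variant (function name is illustrative):

    // Optional: return epoch milliseconds instead of a Date, matching call sites
    // that immediately invoke getTime() on the result.
    function get_http_response_date_ms(r) {
        if (!r.LastModified) throw new Error('LastModified not found on response');
        return r.LastModified.getTime();
    }
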
src/sdk/namespace_s3.js (1)

862-896: Outdated JSDoc types for SSE fields (v2 types).

These reference AWS SDK v2 types. Consider updating to v3 inputs for clarity:

  • ServerSideEncryption -> import("@aws-sdk/client-s3").ServerSideEncryption
  • SSEKMSKeyId -> string, etc.

I can open a quick PR to refresh these annotations if you want.

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these settings in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 704e34d and 9704f13.

📒 Files selected for processing (6)
  • src/endpoint/s3/s3_utils.js (1 hunks)
  • src/sdk/namespace_s3.js (32 hunks)
  • src/sdk/object_sdk.js (2 hunks)
  • src/test/integration_tests/api/s3/test_s3_ops.js (0 hunks)
  • src/test/unit_tests/util_functions_tests/test_cloud_utils.js (0 hunks)
  • src/util/cloud_utils.js (0 hunks)
💤 Files with no reviewable changes (3)
  • src/test/integration_tests/api/s3/test_s3_ops.js
  • src/test/unit_tests/util_functions_tests/test_cloud_utils.js
  • src/util/cloud_utils.js
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/sdk/namespace_s3.js (3)
src/sdk/object_sdk.js (3)
  • noobaa_s3_client (28-28)
  • require (27-27)
  • _ (4-4)
src/util/cloud_utils.js (5)
  • noobaa_s3_client (16-16)
  • require (9-9)
  • require (13-13)
  • require (14-14)
  • _ (11-11)
src/endpoint/s3/s3_utils.js (5)
  • err (421-421)
  • err (445-445)
  • err (538-538)
  • err (547-547)
  • _ (4-4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (1)
src/sdk/object_sdk.js (1)

28-29: LGTM: importing the v3-aware request handler.

This aligns construction with SDK v3 and allows per-endpoint agent selection. Good move.

Comment on lines +676 to 678
if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
return res;
}

⚠️ Potential issue

Broaden HTTP status acceptance to 2xx (v3 returns 206 for ranged GETs).

SDK v3 valid responses include 206 for ranged GET/HEAD; throwing on anything other than 200 will break callers.

Apply this diff:

-function get_http_response_from_resp(res) {
-    if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
-    return res;
-}
+function get_http_response_from_resp(res) {
+    const code = res?.$metadata?.httpStatusCode;
+    if (!code || Math.floor(code / 100) !== 2) {
+        throw new Error(`Response returned with error (status ${code})`);
+    }
+    return res;
+}
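
To illustrate the failure mode (a sketch, not the repo's code): a ranged read in v3 completes with HTTP 206, which a strict === 200 check would reject even though the call succeeded.

    const { S3 } = require('@aws-sdk/client-s3');

    // Ranged GET under SDK v3: success arrives as 206 Partial Content.
    async function read_first_kb(bucket, key) {
        const s3 = new S3({ region: 'us-east-1' }); // placeholder config
        const res = await s3.getObject({ Bucket: bucket, Key: key, Range: 'bytes=0-1023' });
        const code = res.$metadata.httpStatusCode;  // 206 here, not 200
        if (Math.floor(code / 100) !== 2) throw new Error(`unexpected status ${code}`);
        return res.Body; // Node Readable with the first KiB
    }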

Comment on lines 461 to 479
 return new NamespaceS3({
     namespace_resource_id: r.id,
     s3_params: {
-        params: { Bucket: r.target_bucket },
         endpoint: r.endpoint,
         aws_sts_arn: r.aws_sts_arn,
-        accessKeyId: r.access_key.unwrap(),
-        secretAccessKey: r.secret_key.unwrap(),
-        // region: 'us-east-1', // TODO needed?
-        signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
-        s3ForcePathStyle: true,
-        // computeChecksums: false, // disabled by default for performance
-        httpOptions: { agent },
-        access_mode: r.access_mode
+        credentials: {
+            accessKeyId: r.access_key.unwrap(),
+            secretAccessKey: r.secret_key.unwrap(),
+        },
+        region: config.DEFAULT_REGION, // SDKv3 needs region
+        forcePathStyle: true,
+        requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
+        requestStreamBufferSize: config.IO_STREAM_SPLIT_SIZE,
+        computeChecksums: false, // disabled by default for performance
+        access_mode: r.access_mode,
+        //signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)
     },
+    bucket: r.target_bucket,
     stats: this.stats,
 });
 }

💡 Verification agent

🧩 Analysis chain

Verify SigV4-only assumption and auth-method compatibility.

You’ve commented out signatureVersion and defaulted to v3 SigV4. Some S3-compatible endpoints (older/legacy) may still require SigV2. Ensure all supported endpoints in your deployments accept SigV4, or reintroduce signature selection based on r.auth_method.

Run this script to surface any explicit V2 auth expectations and usages:

If we still have deployments requiring SigV2, we should gate signatureVersion off r.auth_method again for compatibility.


🏁 Script executed:

#!/bin/bash
# Find references to auth_method and signature usage across the repo
rg -n --no-heading -A 3 -B 3 $'auth_method|signatureVersion|s3ForcePathStyle|forcePathStyle|SigV2|SigV4|v2' | sed 's/^/>> /'

# Show all places where S3 namespaces for S3_COMPATIBLE/AWS/FLASHBLADE/IBM_COS are configured
ast-grep --pattern $'return new NamespaceS3({\n  $$$\n})' -A 10 -B 5

Length of output: 274658


Restore signatureVersion selection based on auth_method (avoid forcing SDKv3 / SigV4-only).

object_sdk.js currently leaves signatureVersion commented out which forces default SDKv3 behavior. The repo contains explicit AWS_V2 usage and helper code that relies on signatureVersion to pick v2 signing / SDKv2, so we should re-enable signature selection.

Files to check / update:

  • src/sdk/object_sdk.js — re-enable signatureVersion inside s3_params (current snippet has it commented).
  • src/util/cloud_utils.js — get_s3_endpoint_signature_ver(endpoint, auth_method) decides 'v2' vs 'v4'.
  • src/sdk/noobaa_s3_client/noobaa_s3_client.js — should_use_sdk_v2(params) uses signatureVersion === 'v2' to pick SDK v2.
  • Tests / schemas showing AWS_V2 usage: src/test/system_tests/sanity_build_test.js, src/server/system_services/schemas/*.js, plus many tests/examples.

Suggested change (small diff):

-                    //signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)
+                    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)

Reason: reintroducing that line preserves compatibility with endpoints that require SigV2 by allowing noobaa_s3_client to choose SDK v2 when needed. Without it, connections marked AWS_V2 in the DB/tests may fail.


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (1)
src/sdk/object_sdk.js (1)

463-474: Restore signatureVersion selection to preserve SigV2 endpoints and SDK v2 fallback.

Without signatureVersion, we effectively force SDKv3/SigV4-only. Deployments/endpoints that still require SigV2 (e.g., resources configured with AWS_V2) will break because noobaa_s3_client.should_use_sdk_v2 won’t have a signal to choose SDK v2.

Apply this change:

                 s3_params: {
                     endpoint: r.endpoint,
                     aws_sts_arn: r.aws_sts_arn,
+                    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
                     credentials: {
                         accessKeyId: r.access_key.unwrap(),
                         secretAccessKey: r.secret_key.unwrap(),
                     },
                     region: config.DEFAULT_REGION, // SDKv3 needs region
                     forcePathStyle: true,
                     requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
                     access_mode: r.access_mode,
                 },
🧹 Nitpick comments (2)
src/sdk/object_sdk.js (2)

28-28: Nit: Consider lazy-requiring noobaa_s3_client to avoid top-level cost/cycles.

object_sdk is a hot path. Requiring the S3 client wrapper at module load time can increase startup cost and risks a circular require with NamespaceS3. Suggest moving this require into the S3 branch of _setup_single_namespace (just before creating NamespaceS3) to defer it until needed.
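
A minimal sketch of the deferred require, assuming the surrounding _setup_single_namespace structure (the require path and branch shape are illustrative, not the file's exact code):

    // Hypothetical sketch: require the wrapper only in the S3 branch, just before
    // constructing NamespaceS3, so module load cost and any require cycle are deferred.
    _setup_s3_namespace(r) {
        const noobaa_s3_client = require('./noobaa_s3_client/noobaa_s3_client'); // lazy require (path illustrative)
        return new NamespaceS3({
            namespace_resource_id: r.id,
            s3_params: {
                // ...same v3 params as above...
                requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
            },
            bucket: r.target_bucket,
            stats: this.stats,
        });
    }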


470-473: Region default and request handler: add safe fallback and ensure handler reuse.

  • Region: Some S3-compatible endpoints expect us-east-1 (or ignore region). To reduce signature-mismatch risk when config is unset, prefer a safe fallback.
  • Handler: Ensure get_requestHandler_with_suitable_agent caches/returns a shared NodeHttpHandler per endpoint (keep-alive, maxSockets) and respects HTTP(S)_PROXY env vars to avoid socket churn and honor proxies.

Suggested tweak for region:

-                    region: config.DEFAULT_REGION, // SDKv3 needs region
+                    region: r.region || config.DEFAULT_REGION || 'us-east-1', // SDKv3 needs region; fall back safely
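
One possible shape for the per-endpoint handler cache, sketched under the assumption of @smithy/node-http-handler and Node's built-in agents (the real get_requestHandler_with_suitable_agent may differ; proxy support would additionally need a proxy-aware agent):

    const http = require('http');
    const https = require('https');
    const { NodeHttpHandler } = require('@smithy/node-http-handler');

    // Cache one handler per endpoint so keep-alive sockets are reused instead of
    // being re-created on every S3 client construction.
    const handler_cache = new Map();

    function get_cached_request_handler(endpoint) {
        let handler = handler_cache.get(endpoint);
        if (!handler) {
            handler = new NodeHttpHandler({
                httpAgent: new http.Agent({ keepAlive: true, maxSockets: 50 }),   // maxSockets value is illustrative
                httpsAgent: new https.Agent({ keepAlive: true, maxSockets: 50 }),
            });
            handler_cache.set(endpoint, handler);
        }
        return handler;
    }
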
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 9704f13 and f5ec83c.

📒 Files selected for processing (2)
  • src/sdk/namespace_s3.js (32 hunks)
  • src/sdk/object_sdk.js (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/sdk/namespace_s3.js
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-jest-unit-tests
  • GitHub Check: run-package-lock-validation
🔇 Additional comments (2)
src/sdk/object_sdk.js (2)

466-469: Credentials nesting matches SDK v3 config expectations.

Moving access keys under credentials is correct for S3Client v3.


475-475: Passing bucket as a top-level arg is consistent with the new NamespaceS3 signature.

This aligns with the v3 refactor and simplifies s3_params.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

♻️ Duplicate comments (3)
src/endpoint/s3/s3_utils.js (1)

676-678: Broaden HTTP status acceptance to any 2xx (SDK v3 returns 206 for ranged GET/HEAD).

Only allowing 200 will break ranged operations and some HEADs under v3.

Apply this diff:

-function get_http_response_from_resp(res) {
-    if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
-    return res;
-}
+function get_http_response_from_resp(res) {
+    const code = res?.$metadata?.httpStatusCode;
+    if (!code || Math.floor(code / 100) !== 2) {
+        throw new Error(`Response returned with error (status ${code})`);
+    }
+    return res;
+}
src/sdk/namespace_s3.js (1)

480-489: Don’t stringify PartNumberMarker; pass number or undefined.

S3 expects a number. Stringifying is unnecessary and can be surprising.

-        const req = {
+        const req = {
             Bucket: this.bucket,
             Key: params.key,
             UploadId: params.obj_id,
             MaxParts: params.max,
-            PartNumberMarker: params.num_marker?.toString(),
+            PartNumberMarker: params.num_marker,
         };
src/sdk/object_sdk.js (1)

466-475: Restore SigV2 support in ObjectSDK S3 client parameters

You’ve removed the signatureVersion setting from src/sdk/object_sdk.js, which means all S3 clients will default to SigV4, so IBM_COS endpoints that require SigV2 will break. Re-add the signature selector based on r.auth_method:

• File: src/sdk/object_sdk.js
Location: inside the s3_params object for NamespaceS3 (around line 471)

Suggested patch:

                     forcePathStyle: true,
                     requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
                     requestChecksumCalculation: 'WHEN_REQUIRED',
+                    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
                     access_mode: r.access_mode,

This restores the V2/V4 switch via cloud_utils.get_s3_endpoint_signature_ver. If SigV2 is truly deprecated across all pools, remove IBM_COS from config.DEFAULT_S3_AUTH_METHOD and update docs/tests to reflect a SigV4-only policy.

🧹 Nitpick comments (6)
src/endpoint/s3/s3_utils.js (1)

671-673: Clarify error and keep semantics consistent (LastModified).

The function now relies on LastModified, so the error message should reflect that. Callers currently expect a Date (not a string), which is fine.

-    if (!r.LastModified) throw new Error("date not found in response header");
+    if (!r.LastModified) throw new Error("LastModified not found in response");
     return r.LastModified;
src/sdk/namespace_s3.js (5)

32-43: Use v3 credentials shape when setting access key.

With v3 you’re passing credentials under s3_params.credentials, so this.access_key will be undefined and server-side copy checks will always fail.

-        this.access_key = s3_params.accessKeyId;
+        this.access_key = s3_params?.credentials?.accessKeyId || s3_params.accessKeyId;

249-256: Tighten request typing: only GetObjectRequest is used.

JSDoc union includes CopyObjectRequest which isn’t relevant to read path.

-        /** @type {import("@aws-sdk/client-s3").HeadObjectRequest & import("@aws-sdk/client-s3").GetObjectRequest | import("@aws-sdk/client-s3").CopyObjectRequest} */
+        /** @type {import("@aws-sdk/client-s3").HeadObjectRequest & import("@aws-sdk/client-s3").GetObjectRequest} */
         const request = {

265-266: Superfluous status guard (v3 throws on non-2xx).

The extra check is redundant and can hide null-safety issues; removing it simplifies the flow.

-            // In v3, non-2xx typically throws; keep this guard harmless.
-            if (obj_out.$metadata.httpStatusCode >= 300) throw new Error(`S3 getObject failed with status ${obj_out.$metadata.httpStatusCode}`);
+            // v3 throws on non-2xx; no additional guard needed

361-369: Head the created version and normalize last_modified_time.

Use the returned VersionId (if present) to head the exact created version. Also return epoch millis for consistency with other paths.

-        /** @type {import("@aws-sdk/client-s3").HeadObjectRequest} */
-        const request = {
-            Bucket: this.bucket,
-            Key: params.key,
-            VersionId: params.version_id,
-        };
-        const res_head = await this.s3.headObject(request);
-        const last_modified_time = await s3_utils.get_http_response_date(res_head);
+        /** @type {import("@aws-sdk/client-s3").HeadObjectRequest} */
+        const request = {
+            Bucket: this.bucket,
+            Key: params.key,
+            VersionId: res.VersionId || params.version_id,
+        };
+        const res_head = await this.s3.headObject(request);
+        const last_modified_time = s3_utils.get_http_response_date(res_head)?.getTime?.();

852-866: Broaden error translation for v3 error shapes (name/$metadata).

err.code is often undefined in v3. Use err.name and HTTP status to preserve mappings.

-    _translate_error_code(params, err) {
-        if (err.code === 'NoSuchKey') err.rpc_code = 'NO_SUCH_OBJECT';
-        else if (err.code === 'NotFound') err.rpc_code = 'NO_SUCH_OBJECT';
-        else if (err.code === 'InvalidRange') err.rpc_code = 'INVALID_RANGE';
-        else if (params.md_conditions) {
-            const md_conditions = params.md_conditions;
-            if (err.code === 'PreconditionFailed') {
-                if (md_conditions.if_match_etag) err.rpc_code = 'IF_MATCH_ETAG';
-                else if (md_conditions.if_unmodified_since) err.rpc_code = 'IF_UNMODIFIED_SINCE';
-            } else if (err.code === 'NotModified') {
-                if (md_conditions.if_modified_since) err.rpc_code = 'IF_MODIFIED_SINCE';
-                else if (md_conditions.if_none_match_etag) err.rpc_code = 'IF_NONE_MATCH_ETAG';
-            }
-        }
-    }
+    _translate_error_code(params, err) {
+        const code = err?.code || err?.name;
+        const http = err?.$metadata?.httpStatusCode;
+        if (code === 'NoSuchKey' || code === 'NotFound' || http === 404) {
+            err.rpc_code = 'NO_SUCH_OBJECT';
+        } else if (code === 'InvalidRange' || http === 416) {
+            err.rpc_code = 'INVALID_RANGE';
+        } else if (params.md_conditions) {
+            const md_conditions = params.md_conditions;
+            if (code === 'PreconditionFailed' || http === 412) {
+                if (md_conditions.if_match_etag) err.rpc_code = 'IF_MATCH_ETAG';
+                else if (md_conditions.if_unmodified_since) err.rpc_code = 'IF_UNMODIFIED_SINCE';
+            } else if (code === 'NotModified' || http === 304) {
+                if (md_conditions.if_modified_since) err.rpc_code = 'IF_MODIFIED_SINCE';
+                else if (md_conditions.if_none_match_etag) err.rpc_code = 'IF_NONE_MATCH_ETAG';
+            }
+        }
+    }
📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro

💡 Knowledge Base configuration:

  • MCP integration is disabled by default for public repositories
  • Jira integration is disabled by default for public repositories
  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between f5ec83c and 4b2e592.

📒 Files selected for processing (6)
  • src/endpoint/s3/s3_utils.js (1 hunks)
  • src/sdk/namespace_s3.js (32 hunks)
  • src/sdk/object_sdk.js (2 hunks)
  • src/test/integration_tests/api/s3/test_s3_ops.js (0 hunks)
  • src/test/unit_tests/util_functions_tests/test_cloud_utils.js (0 hunks)
  • src/util/cloud_utils.js (0 hunks)
💤 Files with no reviewable changes (3)
  • src/test/unit_tests/util_functions_tests/test_cloud_utils.js
  • src/util/cloud_utils.js
  • src/test/integration_tests/api/s3/test_s3_ops.js
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/sdk/object_sdk.js (5)
src/sdk/namespace_s3.js (2)
  • noobaa_s3_client (14-14)
  • config (8-8)
src/util/cloud_utils.js (1)
  • noobaa_s3_client (16-16)
src/server/system_services/account_server.js (1)
  • noobaa_s3_client (26-26)
src/server/bg_services/namespace_monitor.js (2)
  • noobaa_s3_client (15-15)
  • config (13-13)
src/sdk/bucketspace_s3.js (1)
  • noobaa_s3_client (6-6)
src/sdk/namespace_s3.js (2)
src/sdk/object_sdk.js (3)
  • noobaa_s3_client (28-28)
  • require (27-27)
  • _ (4-4)
src/util/cloud_utils.js (5)
  • noobaa_s3_client (16-16)
  • require (9-9)
  • require (13-13)
  • require (14-14)
  • _ (11-11)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build Noobaa Image
  • GitHub Check: run-package-lock-validation
  • GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)
src/sdk/namespace_s3.js (2)

210-222: LGTM: v3-compatible InvalidRange handling.

Catching by err.name and checking $metadata.httpStatusCode === 416 covers v3 error shapes and keeps the fallback sane.
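
For illustration, a hedged sketch of catching that v3 error shape around a ranged read (not the repo's code):

    // v3 errors expose name and $metadata rather than v2's err.code.
    async function read_with_range_fallback(s3, bucket, key) {
        try {
            return await s3.getObject({ Bucket: bucket, Key: key, Range: 'bytes=0-0' });
        } catch (err) {
            const invalid_range = err.name === 'InvalidRange' || err.$metadata?.httpStatusCode === 416;
            if (!invalid_range) throw err;
            return s3.getObject({ Bucket: bucket, Key: key }); // fallback: un-ranged read
        }
    }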


249-286: LGTM: streaming fix returns live Node readable and updates stats via tap.

This avoids pre-consuming the stream and uses v3’s Body as a Readable directly. Error propagation and rethrow are correct.
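
A rough sketch of the streaming pattern with a hypothetical stats helper (method names here are illustrative, not the repo's):

    const { PassThrough } = require('stream');

    // Return the live v3 Body stream, tapping byte counts as data flows through
    // instead of pre-consuming the stream into memory.
    async function read_object_stream_sketch(s3, bucket, key, stats) {
        const res = await s3.getObject({ Bucket: bucket, Key: key });
        const tap = new PassThrough();
        tap.on('data', chunk => stats.update_read_bytes(chunk.length)); // hypothetical stats API
        res.Body.on('error', err => tap.emit('error', err));            // propagate source errors
        return res.Body.pipe(tap);                                      // caller gets a Readable to pipe onward
    }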

src/sdk/object_sdk.js (1)

466-475: LGTM: v3 client config wiring (credentials, region, forcePathStyle, requestHandler).

Solid move to v3 semantics; dynamic agent via requestHandler is appropriate.

Also applies to: 476-478
