SDK | Upgrade AWS SDK to v3 - Namespace_S3 #9163
base: master
Conversation
Walkthrough

Migrates S3 usage from AWS SDK v2 to v3 across NamespaceS3 and ObjectSDK, changes S3 response handling to use top-level fields, moves bucket into the NamespaceS3 constructor, removes legacy AWS STS v2 helpers/tests in cloud_utils and tests, and updates the requestHandler/credentials shape for v3 clients.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as ObjectSDK
    participant NS as NamespaceS3
    participant S3 as S3Client (v3)
    participant RH as RequestHandler
    Note over App: initialize
    App->>RH: get_requestHandler_with_suitable_agent(endpoint)
    App->>NS: new NamespaceS3({ namespace_resource_id, s3_params, bucket, stats })
    NS->>S3: Initialize S3 client with s3_params + requestHandler
    Note over App,NS: read object stream flow
    App->>NS: read_object_stream(key, range?)
    NS->>S3: GetObject (v3) / HeadObject (v3)
    S3-->>NS: top-level metadata (LastModified, ETag), Body (stream)
    NS->>App: Return Readable stream, update stats
```

```mermaid
sequenceDiagram
    participant Caller as Cloud Utils Consumer
    participant STS as STSClient (v3)
    participant CloudUtils as cloud_utils
    Caller->>CloudUtils: generate_aws_sdkv3_sts_creds(params)
    CloudUtils->>STS: AssumeRoleWithWebIdentity
    STS-->>CloudUtils: { accessKeyId, secretAccessKey, sessionToken }
    CloudUtils->>Caller: plain credentials object
    Caller->>Caller: create S3 client with credentials (via noobaa requestHandler)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Force-pushed: d72038c → 9704f13
Actionable comments posted: 7
🔭 Outside diff range comments (1)
src/sdk/namespace_s3.js (1)

Lines 126-132: Correct pagination field: use NextUploadIdMarker.

The response's next marker is NextUploadIdMarker. Returning UploadIdMarker will regress pagination. Apply this diff:

```diff
 return {
     objects: _.map(res.Uploads, obj => this._get_s3_object_info(obj, params.bucket)),
     common_prefixes: _.map(res.CommonPrefixes, 'Prefix'),
     is_truncated: res.IsTruncated,
     next_marker: res.NextKeyMarker,
-    next_upload_id_marker: res.UploadIdMarker,
+    next_upload_id_marker: res.NextUploadIdMarker,
 };
```
🧹 Nitpick comments (2)

src/endpoint/s3/s3_utils.js (1)

Lines 671-673: Align error message with v3 shape and consider normalizing the return type.

- The message references "response header" while v3 exposes LastModified as a top-level field. Nit, but confusing.
- Optional: consider returning a numeric timestamp for consistency with other paths that use getTime().

Apply this diff for the message:

```diff
- if (!r.LastModified) throw new Error("date not found in response header");
+ if (!r.LastModified) throw new Error("LastModified not found on response");
```

src/sdk/namespace_s3.js (1)

Lines 862-896: Outdated JSDoc types for SSE fields (v2 types).

These reference AWS SDK v2 types. Consider updating to v3 inputs for clarity:

- ServerSideEncryption -> import("@aws-sdk/client-s3").ServerSideEncryption
- SSEKMSKeyId -> string, etc.

I can open a quick PR to refresh these annotations if you want.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these settings in your CodeRabbit configuration.
📒 Files selected for processing (6)

- src/endpoint/s3/s3_utils.js (1 hunks)
- src/sdk/namespace_s3.js (32 hunks)
- src/sdk/object_sdk.js (2 hunks)
- src/test/integration_tests/api/s3/test_s3_ops.js (0 hunks)
- src/test/unit_tests/util_functions_tests/test_cloud_utils.js (0 hunks)
- src/util/cloud_utils.js (0 hunks)
💤 Files with no reviewable changes (3)
- src/test/integration_tests/api/s3/test_s3_ops.js
- src/test/unit_tests/util_functions_tests/test_cloud_utils.js
- src/util/cloud_utils.js
🧰 Additional context used
🧬 Code Graph Analysis (1)
src/sdk/namespace_s3.js (3)
- src/sdk/object_sdk.js (3): noobaa_s3_client (28-28), require (27-27), _ (4-4)
- src/util/cloud_utils.js (5): noobaa_s3_client (16-16), require (9-9), require (13-13), require (14-14), _ (11-11)
- src/endpoint/s3/s3_utils.js (5): err (421-421), err (445-445), err (538-538), err (547-547), _ (4-4)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: run-jest-unit-tests
🔇 Additional comments (1)

src/sdk/object_sdk.js (1)

Lines 28-29: LGTM: importing the v3-aware request handler.

This aligns construction with SDK v3 and allows per-endpoint agent selection. Good move.
```js
    if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
    return res;
}
```
Broaden HTTP status acceptance to 2xx (v3 returns 206 for ranged GETs).

SDK v3 valid responses include 206 for ranged GET/HEAD; throwing on anything other than 200 will break callers.

Apply this diff:

```diff
-function get_http_response_from_resp(res) {
-    if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
-    return res;
-}
+function get_http_response_from_resp(res) {
+    const code = res?.$metadata?.httpStatusCode;
+    if (!code || Math.floor(code / 100) !== 2) {
+        throw new Error(`Response returned with error (status ${code})`);
+    }
+    return res;
+}
```
src/sdk/object_sdk.js (Outdated)
```diff
 return new NamespaceS3({
     namespace_resource_id: r.id,
     s3_params: {
-        params: { Bucket: r.target_bucket },
         endpoint: r.endpoint,
         aws_sts_arn: r.aws_sts_arn,
-        accessKeyId: r.access_key.unwrap(),
-        secretAccessKey: r.secret_key.unwrap(),
-        // region: 'us-east-1', // TODO needed?
-        signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
-        s3ForcePathStyle: true,
-        // computeChecksums: false, // disabled by default for performance
-        httpOptions: { agent },
-        access_mode: r.access_mode
+        credentials: {
+            accessKeyId: r.access_key.unwrap(),
+            secretAccessKey: r.secret_key.unwrap(),
+        },
+        region: config.DEFAULT_REGION, // SDKv3 needs region
+        forcePathStyle: true,
+        requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
+        requestStreamBufferSize: config.IO_STREAM_SPLIT_SIZE,
+        computeChecksums: false, // disabled by default for performance
+        access_mode: r.access_mode,
+        //signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)
     },
+    bucket: r.target_bucket,
     stats: this.stats,
 });
 }
```
💡 Verification agent
🧩 Analysis chain

Verify SigV4-only assumption and auth-method compatibility.

You've commented out signatureVersion and defaulted to SigV4. Some S3-compatible endpoints (older/legacy) may still require SigV2. Ensure all supported endpoints in your deployments accept SigV4, or reintroduce signature selection based on r.auth_method.

Run this script to surface any explicit V2 auth expectations and usages. If we still have deployments requiring SigV2, we should gate signatureVersion off r.auth_method again for compatibility.

🏁 Script executed:

```bash
#!/bin/bash
# Find references to auth_method and signature usage across the repo
rg -n --no-heading -A 3 -B 3 $'auth_method|signatureVersion|s3ForcePathStyle|forcePathStyle|SigV2|SigV4|v2' | sed 's/^/>> /'
# Show all places where S3 namespaces for S3_COMPATIBLE/AWS/FLASHBLADE/IBM_COS are configured
ast-grep --pattern $'return new NamespaceS3({\n $$$\n})' -A 10 -B 5
```

Length of output: 274658
Restore signatureVersion selection based on auth_method (avoid forcing SDKv3 / SigV4-only).

object_sdk.js currently leaves signatureVersion commented out, which forces default SDKv3 behavior. The repo contains explicit AWS_V2 usage and helper code that relies on signatureVersion to pick v2 signing / SDKv2, so we should re-enable signature selection.

Files to check / update:
- src/sdk/object_sdk.js — re-enable signatureVersion inside s3_params (current snippet has it commented).
- src/util/cloud_utils.js — get_s3_endpoint_signature_ver(endpoint, auth_method) decides 'v2' vs 'v4'.
- src/sdk/noobaa_s3_client/noobaa_s3_client.js — should_use_sdk_v2(params) uses signatureVersion === 'v2' to pick SDK v2.
- Tests / schemas showing AWS_V2 usage: src/test/system_tests/sanity_build_test.js, src/server/system_services/schemas/*.js, plus many tests/examples.

Suggested change (small diff):

```diff
-    //signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)
+    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method)
```

Reason: reintroducing that line preserves compatibility with endpoints that require SigV2 by allowing noobaa_s3_client to choose SDK v2 when needed. Without it, connections marked AWS_V2 in the DB/tests may fail.
Actionable comments posted: 0

♻️ Duplicate comments (1)

src/sdk/object_sdk.js (1)

Lines 463-474: Restore signatureVersion selection to preserve SigV2 endpoints and SDK v2 fallback.

Without signatureVersion, we effectively force SDKv3/SigV4-only. Deployments/endpoints that still require SigV2 (e.g., resources configured with AWS_V2) will break because noobaa_s3_client.should_use_sdk_v2 won't have a signal to choose SDK v2.

Apply this change:

```diff
 s3_params: {
     endpoint: r.endpoint,
     aws_sts_arn: r.aws_sts_arn,
+    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
     credentials: {
         accessKeyId: r.access_key.unwrap(),
         secretAccessKey: r.secret_key.unwrap(),
     },
     region: config.DEFAULT_REGION, // SDKv3 needs region
     forcePathStyle: true,
     requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
     access_mode: r.access_mode,
 },
```
🧹 Nitpick comments (2)

src/sdk/object_sdk.js (2)

Line 28: Nit: Consider lazy-requiring noobaa_s3_client to avoid top-level cost/cycles.

object_sdk is a hot path. Requiring the S3 client wrapper at module load time can increase startup cost and risks a circular require with NamespaceS3. Suggest moving this require into the S3 branch of _setup_single_namespace (just before creating NamespaceS3) to defer it until needed.

Lines 470-473: Region default and request handler: add safe fallback and ensure handler reuse.

- Region: Some S3-compatible endpoints expect us-east-1 (or ignore region). To reduce signature-mismatch risk when config is unset, prefer a safe fallback.
- Handler: Ensure get_requestHandler_with_suitable_agent caches/returns a shared NodeHttpHandler per endpoint (keep-alive, maxSockets) and respects HTTP(S)_PROXY env vars to avoid socket churn and honor proxies.

Suggested tweak for region:

```diff
-    region: config.DEFAULT_REGION, // SDKv3 needs region
+    region: r.region || config.DEFAULT_REGION || 'us-east-1', // SDKv3 needs region; fall back safely
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)

- src/sdk/namespace_s3.js (32 hunks)
- src/sdk/object_sdk.js (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- src/sdk/namespace_s3.js
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Noobaa Image
- GitHub Check: run-jest-unit-tests
- GitHub Check: run-package-lock-validation
🔇 Additional comments (2)

src/sdk/object_sdk.js (2)

Lines 466-469: Credentials nesting matches SDK v3 config expectations.

Moving access keys under credentials is correct for S3Client v3.

Line 475: Passing bucket as a top-level arg is consistent with the new NamespaceS3 signature.

This aligns with the v3 refactor and simplifies s3_params.
Signed-off-by: Naveen Paul <[email protected]>
Force-pushed: f5ec83c → 4b2e592
Actionable comments posted: 0

♻️ Duplicate comments (3)

src/endpoint/s3/s3_utils.js (1)

Lines 676-678: Broaden HTTP status acceptance to any 2xx (SDK v3 returns 206 for ranged GET/HEAD).

Only allowing 200 will break ranged operations and some HEADs under v3.

Apply this diff:

```diff
-function get_http_response_from_resp(res) {
-    if (res.$metadata.httpStatusCode !== 200) throw new Error("Response return with error");
-    return res;
-}
+function get_http_response_from_resp(res) {
+    const code = res?.$metadata?.httpStatusCode;
+    if (!code || Math.floor(code / 100) !== 2) {
+        throw new Error(`Response returned with error (status ${code})`);
+    }
+    return res;
+}
```

src/sdk/namespace_s3.js (1)

Lines 480-489: Don't stringify PartNumberMarker; pass number or undefined.

S3 expects a number. Stringifying is unnecessary and can be surprising.

```diff
 const req = {
     Bucket: this.bucket,
     Key: params.key,
     UploadId: params.obj_id,
     MaxParts: params.max,
-    PartNumberMarker: params.num_marker?.toString(),
+    PartNumberMarker: params.num_marker,
 };
```

src/sdk/object_sdk.js (1)
Lines 466-475: Restore SigV2 support in ObjectSDK S3 client parameters.

You've removed the signatureVersion setting from src/sdk/object_sdk.js, which means all S3 clients will default to SigV4 — even IBM_COS endpoints that require SigV2 will break. Re-add the signature selector based on r.auth_method:

- File: src/sdk/object_sdk.js
- Location: inside the s3_params object for NamespaceS3 (around line 471)

Suggested patch:

```diff
     forcePathStyle: true,
     requestHandler: noobaa_s3_client.get_requestHandler_with_suitable_agent(r.endpoint),
     requestChecksumCalculation: 'WHEN_REQUIRED',
+    signatureVersion: cloud_utils.get_s3_endpoint_signature_ver(r.endpoint, r.auth_method),
     access_mode: r.access_mode,
```

This restores the V2/V4 switch via cloud_utils.get_s3_endpoint_signature_ver. If SigV2 is truly deprecated across all pools, remove IBM_COS from config.DEFAULT_S3_AUTH_METHOD and update docs/tests to reflect a SigV4-only policy.
🧹 Nitpick comments (6)

src/endpoint/s3/s3_utils.js (1)

Lines 671-673: Clarify error and keep semantics consistent (LastModified).

The function now relies on LastModified, so the error message should reflect that. Callers currently expect a Date (not a string), which is fine.

```diff
-    if (!r.LastModified) throw new Error("date not found in response header");
+    if (!r.LastModified) throw new Error("LastModified not found in response");
     return r.LastModified;
```

src/sdk/namespace_s3.js (5)

Lines 32-43: Use v3 credentials shape when setting access key.

With v3 you're passing credentials under s3_params.credentials, so this.access_key will be undefined and server-side copy checks will always fail.

```diff
-    this.access_key = s3_params.accessKeyId;
+    this.access_key = s3_params?.credentials?.accessKeyId || s3_params.accessKeyId;
```

Lines 249-256: Tighten request typing: only GetObjectRequest is used.

JSDoc union includes CopyObjectRequest, which isn't relevant to the read path.

```diff
-    /** @type {import("@aws-sdk/client-s3").HeadObjectRequest & import("@aws-sdk/client-s3").GetObjectRequest | import("@aws-sdk/client-s3").CopyObjectRequest} */
+    /** @type {import("@aws-sdk/client-s3").HeadObjectRequest & import("@aws-sdk/client-s3").GetObjectRequest} */
     const request = {
```

Lines 265-266: Superfluous status guard (v3 throws on non-2xx).

The extra check is redundant and can hide null-safety issues; removing it simplifies the flow.

```diff
-    // In v3, non-2xx typically throws; keep this guard harmless.
-    if (obj_out.$metadata.httpStatusCode >= 300) throw new Error(`S3 getObject failed with status ${obj_out.$metadata.httpStatusCode}`);
+    // v3 throws on non-2xx; no additional guard needed
```

Lines 361-369: Head the created version and normalize last_modified_time.

Use the returned VersionId (if present) to head the exact created version. Also return epoch millis for consistency with other paths.

```diff
 /** @type {import("@aws-sdk/client-s3").HeadObjectRequest} */
 const request = {
     Bucket: this.bucket,
     Key: params.key,
-    VersionId: params.version_id,
+    VersionId: res.VersionId || params.version_id,
 };
 const res_head = await this.s3.headObject(request);
-const last_modified_time = await s3_utils.get_http_response_date(res_head);
+const last_modified_time = s3_utils.get_http_response_date(res_head)?.getTime?.();
```

Lines 852-866: Broaden error translation for v3 error shapes (name/$metadata).

err.code is often undefined in v3. Use err.name and the HTTP status to preserve mappings.

```diff
-_translate_error_code(params, err) {
-    if (err.code === 'NoSuchKey') err.rpc_code = 'NO_SUCH_OBJECT';
-    else if (err.code === 'NotFound') err.rpc_code = 'NO_SUCH_OBJECT';
-    else if (err.code === 'InvalidRange') err.rpc_code = 'INVALID_RANGE';
-    else if (params.md_conditions) {
-        const md_conditions = params.md_conditions;
-        if (err.code === 'PreconditionFailed') {
-            if (md_conditions.if_match_etag) err.rpc_code = 'IF_MATCH_ETAG';
-            else if (md_conditions.if_unmodified_since) err.rpc_code = 'IF_UNMODIFIED_SINCE';
-        } else if (err.code === 'NotModified') {
-            if (md_conditions.if_modified_since) err.rpc_code = 'IF_MODIFIED_SINCE';
-            else if (md_conditions.if_none_match_etag) err.rpc_code = 'IF_NONE_MATCH_ETAG';
-        }
-    }
-}
+_translate_error_code(params, err) {
+    const code = err?.code || err?.name;
+    const http = err?.$metadata?.httpStatusCode;
+    if (code === 'NoSuchKey' || code === 'NotFound' || http === 404) {
+        err.rpc_code = 'NO_SUCH_OBJECT';
+    } else if (code === 'InvalidRange' || http === 416) {
+        err.rpc_code = 'INVALID_RANGE';
+    } else if (params.md_conditions) {
+        const md_conditions = params.md_conditions;
+        if (code === 'PreconditionFailed' || http === 412) {
+            if (md_conditions.if_match_etag) err.rpc_code = 'IF_MATCH_ETAG';
+            else if (md_conditions.if_unmodified_since) err.rpc_code = 'IF_UNMODIFIED_SINCE';
+        } else if (code === 'NotModified' || http === 304) {
+            if (md_conditions.if_modified_since) err.rpc_code = 'IF_MODIFIED_SINCE';
+            else if (md_conditions.if_none_match_etag) err.rpc_code = 'IF_NONE_MATCH_ETAG';
+        }
+    }
+}
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)

- src/endpoint/s3/s3_utils.js (1 hunks)
- src/sdk/namespace_s3.js (32 hunks)
- src/sdk/object_sdk.js (2 hunks)
- src/test/integration_tests/api/s3/test_s3_ops.js (0 hunks)
- src/test/unit_tests/util_functions_tests/test_cloud_utils.js (0 hunks)
- src/util/cloud_utils.js (0 hunks)
💤 Files with no reviewable changes (3)
- src/test/unit_tests/util_functions_tests/test_cloud_utils.js
- src/util/cloud_utils.js
- src/test/integration_tests/api/s3/test_s3_ops.js
🧰 Additional context used
🧬 Code Graph Analysis (2)
src/sdk/object_sdk.js (5)
- src/sdk/namespace_s3.js (2): noobaa_s3_client (14-14), config (8-8)
- src/util/cloud_utils.js (1): noobaa_s3_client (16-16)
- src/server/system_services/account_server.js (1): noobaa_s3_client (26-26)
- src/server/bg_services/namespace_monitor.js (2): noobaa_s3_client (15-15), config (13-13)
- src/sdk/bucketspace_s3.js (1): noobaa_s3_client (6-6)

src/sdk/namespace_s3.js (2)
- src/sdk/object_sdk.js (3): noobaa_s3_client (28-28), require (27-27), _ (4-4)
- src/util/cloud_utils.js (5): noobaa_s3_client (16-16), require (9-9), require (13-13), require (14-14), _ (11-11)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build Noobaa Image
- GitHub Check: run-package-lock-validation
- GitHub Check: run-jest-unit-tests
🔇 Additional comments (3)

src/sdk/namespace_s3.js (2)

Lines 210-222: LGTM: v3-compatible InvalidRange handling.

Catching by err.name and checking $metadata.httpStatusCode === 416 covers v3 error shapes and keeps the fallback sane.

Lines 249-286: LGTM: streaming fix returns a live Node readable and updates stats via tap.

This avoids pre-consuming the stream and uses v3's Body as a Readable directly. Error propagation and rethrow are correct.

src/sdk/object_sdk.js (1)

Lines 466-475: LGTM: v3 client config wiring (credentials, region, forcePathStyle, requestHandler).

Solid move to v3 semantics; dynamic agent via requestHandler is appropriate. Also applies to lines 476-478.
Describe the Problem
Explain the Changes
Issues: Fixed #xxx / Gap #xxx
Testing Instructions:
Reference: https://mattsumme.rs/2025/aws-sdk-v3-put-object-stream-regression/
https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/migrating/notable-changes/
Summary by CodeRabbit