Support for minimum chunk size handling in streaming uploads (re: WHEN_REQUIRED) #6949

@vvo

Description

Describe the feature

Since AWS SDK for JavaScript v3 version 3.729.0, which enables request checksum calculation by default, streaming uploads to S3 require every chunk except the last to be at least 8192 bytes. This feature request proposes adding built-in chunk size enforcement to the SDK for streaming uploads, similar to what the lib-storage Upload utility likely does internally, but without requiring users to switch to the Upload abstraction.

Use Case

When using direct streaming uploads with S3, users who upgrade to AWS SDK for JavaScript v3 version 3.729.0+ encounter the error: InvalidChunkSizeError: Only the last chunk is allowed to have a size less than 8192 bytes.

A common scenario is proxying an incoming HTTP request body directly to S3, for example:

// Example server endpoint handling file uploads
app.post('/upload', async (req, res) => {
  try {
    const result = await s3Client.send(new PutObjectCommand({
      Bucket: 'my-bucket',
      Key: 'my-file.txt',
      Body: req, // Using the request stream directly, instead of buffering the whole file
      ContentLength: Number(req.headers['content-length'])
    }));
    
    res.status(200).json({ success: true, ...result });
  } catch (error) {
    console.error('Upload failed:', error);
    res.status(500).json({ success: false, error: error.message });
  }
});

The current workarounds (the first two are sketched just after this list) are:

  • Disable checksums with requestChecksumCalculation: 'WHEN_REQUIRED' (losing data integrity benefits)
  • Switch to the Upload utility (requiring significant architectural changes)
  • Implement custom chunking logic (shown in proposed solution)
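
For reference, here are minimal sketches of the first two workarounds. They are independent alternatives, and the bucket, key, and region values are placeholders carried over from the example above.

// Workaround 1: configure the client to compute request checksums only when
// required. This restores the pre-3.729.0 streaming behavior, but gives up
// the automatic integrity checks.
import { S3Client } from '@aws-sdk/client-s3';

const s3Client = new S3Client({
  region: 'us-east-1',
  requestChecksumCalculation: 'WHEN_REQUIRED',
});

The await in the second sketch belongs inside an async handler such as the /upload route shown above:

// Workaround 2: replace PutObjectCommand with the lib-storage Upload utility,
// which splits the incoming stream into parts of a valid size internally.
import { Upload } from '@aws-sdk/lib-storage';

const result = await new Upload({
  client: s3Client,
  params: { Bucket: 'my-bucket', Key: 'my-file.txt', Body: req },
}).done();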

This issue affects many users as seen in issue #6810 and would benefit from a built-in solution.

Proposed Solution

A built-in chunking mechanism in the SDK that ensures all chunks meet the minimum size requirement. Here's our current workaround implementation that could serve as a reference:

import { Transform, type TransformCallback } from 'stream';
import type { IncomingMessage } from 'http';

export class MinimumChunkSizeStream extends Transform {
  private buffer: Buffer;
  private minChunkSize: number;

  constructor(minChunkSize: number = 8192) {
    super();
    this.buffer = Buffer.alloc(0);
    this.minChunkSize = minChunkSize;
  }

  _transform(
    chunk: Buffer,
    _encoding: BufferEncoding,
    callback: TransformCallback,
  ): void {
    // Add new data to our buffer
    this.buffer = Buffer.concat([this.buffer, chunk]);

    // While we have enough data, push complete chunks
    while (this.buffer.length >= this.minChunkSize) {
      const chunkToSend = this.buffer.subarray(0, this.minChunkSize);
      this.push(chunkToSend);
      this.buffer = this.buffer.subarray(this.minChunkSize);
    }

    callback();
  }

  _flush(callback: TransformCallback): void {
    // Push any remaining data as the final chunk (may be smaller than minChunkSize)
    if (this.buffer.length > 0) {
      this.push(this.buffer);
    }
    callback();
  }
}

/**
 * Helper function to wrap a stream for S3 uploads ensuring minimum chunk sizes
 */
export function minimumChunk(stream: IncomingMessage | NodeJS.ReadableStream, minChunkSize: number = 8192): Transform {
  const chunker = new MinimumChunkSizeStream(minChunkSize);
  stream.pipe(chunker);
  
  // Handle error propagation
  stream.on('error', (err: Error) => {
    chunker.emit('error', err);
  });
  
  return chunker;
}

Usage in the upload endpoint, with the helper above saved as chunking.ts:

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';
import express from 'express';
import { minimumChunk } from './chunking';

const app = express();
const s3Client = new S3Client({ region: 'us-east-1' });

app.post('/upload', async (req, res) => {
  try {
    const result = await s3Client.send(new PutObjectCommand({
      Bucket: 'my-bucket',
      Key: 'my-file.txt',
      Body: minimumChunk(req), // Simply wrap the request with the helper
      ContentLength: Number(req.headers['content-length'])
    }));
    
    res.status(200).json({ success: true, ...result });
  } catch (error) {
    console.error('Upload failed:', error);
    res.status(500).json({ success: false, error: error.message });
  }
});
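
As a quick sanity check of the transform on its own, the following minimal sketch (the twenty 1 KiB input chunks are arbitrary) pipes small chunks through MinimumChunkSizeStream and prints the resulting chunk sizes:

import { Readable } from 'stream';
import { MinimumChunkSizeStream } from './chunking';

// Feed twenty 1 KiB chunks (20480 bytes total) through the transform and
// record the sizes that come out the other side.
const source = Readable.from(
  Array.from({ length: 20 }, () => Buffer.alloc(1024, 'a')),
);

const sizes: number[] = [];
const chunker = source.pipe(new MinimumChunkSizeStream());

chunker.on('data', (chunk: Buffer) => sizes.push(chunk.length));
chunker.on('end', () => {
  // Every chunk except the last is exactly 8192 bytes: [ 8192, 8192, 4096 ]
  console.log(sizes);
});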

Other Information

Related issue: #6810 - Announcement: S3 default integrity change
Users are reporting several error messages that could lead them here:

InvalidChunkSizeError: Only the last chunk is allowed to have a size less than 8192 bytes
An error was encountered in a non-retryable streaming request
Bad part size detected for partNumber X

It would be helpful to know whether our chunking implementation is similar to what the Upload utility does internally, and whether something like it could be incorporated directly into the client for streaming uploads.

Acknowledgements

  • I may be able to implement this feature request
  • This feature might incur a breaking change

SDK version used

3.758.0

Environment details (OS name and version, etc.)

macOS (latest)

Labels

  • feature-request: New feature or enhancement. May require GitHub community feedback.
  • response-requested: Waiting on additional info and feedback. Will move to "closing-soon" in 7 days.
