
Conversation


@muqsitnawaz muqsitnawaz commented Jul 15, 2025

What kind of change does this PR introduce?

Adds support for S3 and MinIO as storage providers. The provider is selected with the STORAGE_PROVIDER environment variable.
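For illustration only, a .env sketch using the variable names this PR introduces in the upload factory (all values below are placeholders, not real credentials):

```shell
# Select the storage backend; this PR adds 's3' and 'minio' as options.
STORAGE_PROVIDER=s3

# S3-compatible storage (S3_ENDPOINT is optional for AWS S3 itself).
S3_ACCESS_KEY_ID=your-access-key-id
S3_SECRET_ACCESS_KEY=your-secret-access-key
S3_REGION=us-east-1
S3_BUCKET_NAME=your-bucket
# S3_ENDPOINT=https://s3.example.com

# MinIO alternative (all variables required, including the endpoint).
# STORAGE_PROVIDER=minio
# MINIO_ACCESS_KEY=your-access-key
# MINIO_SECRET_KEY=your-secret-key
# MINIO_REGION=us-east-1
# MINIO_BUCKET_NAME=your-bucket
# MINIO_ENDPOINT=http://localhost:9000
```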

Why was this change needed?

#322

Other information:

no

Checklist:

Put an "X" in the boxes below to indicate you have followed the checklist:

  • I have read the CONTRIBUTING guide.
  • I checked that there were not similar issues or PRs already open for this.
  • This PR fixes just ONE issue (do not include multiple issues or types of change in the same PR). For example, don't try to fix a UI issue and include new dependencies in the same PR.

Summary by CodeRabbit

  • New Features

    • Added support for S3-compatible storage providers, including AWS S3 and MinIO, for file uploads and deletions.
    • Updated environment variable documentation to include new configuration options for S3 and MinIO storage.
  • Documentation

    • Expanded and clarified environment variable descriptions to reflect new storage provider options.


vercel bot commented Jul 15, 2025

@muqsitnawaz is attempting to deploy a commit to the Listinai Team on Vercel.

A member of the Team first needs to authorize it.


coderabbitai bot commented Jul 15, 2025

Walkthrough

Support for S3-compatible storage providers has been added, including AWS S3 and MinIO. New classes for S3 and MinIO storage were introduced, and the upload factory was updated to instantiate these providers based on environment variables. The .env.example file was updated to document the new configuration options.

Changes

File(s) Change Summary
.env.example Added S3-compatible storage configuration variables and updated STORAGE_PROVIDER description.
libraries/nestjs-libraries/src/upload/s3.storage.ts Added new S3Storage class implementing file upload/delete to S3-compatible services.
libraries/nestjs-libraries/src/upload/minio.storage.ts Added new MinIOStorage class implementing file upload/delete to MinIO using S3 SDK.
libraries/nestjs-libraries/src/upload/upload.factory.ts Updated factory to support "s3" and "minio" storage providers using new classes and environment variables.

Sequence Diagram(s)

sequenceDiagram
    participant Client
    participant UploadFactory
    participant S3Storage
    participant MinIOStorage

    Client->>UploadFactory: createStorage("s3" or "minio")
    alt "s3" selected
        UploadFactory->>S3Storage: new S3Storage(env vars)
        UploadFactory-->>Client: S3Storage instance
    else "minio" selected
        UploadFactory->>MinIOStorage: new MinIOStorage(env vars)
        UploadFactory-->>Client: MinIOStorage instance
    end

    Client->>S3Storage: uploadFile(file) / uploadSimple(url) / removeFile(path)
    Client->>MinIOStorage: uploadFile(file) / uploadSimple(url) / removeFile(path)

Poem

A hop, a skip, a storage leap,
Now S3 and MinIO secrets we keep!
With buckets and keys, our files take flight,
Through clouds and endpoints, day or night.
The factory now knows which way to go—
🐇 Uploads are faster, just so you know!

Warning

There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

If the error stems from missing dependencies, add them to the package.json file. For unrecoverable errors (e.g., due to private dependencies), disable the tool in the CodeRabbit configuration.

libraries/nestjs-libraries/src/upload/minio.storage.ts

Oops! Something went wrong! :(

ESLint: 8.57.0

Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@eslint/eslintrc' imported from /eslint.config.mjs
at Object.getPackageJSONURL (node:internal/modules/package_json_reader:255:9)
at packageResolve (node:internal/modules/esm/resolve:767:81)
at moduleResolve (node:internal/modules/esm/resolve:853:18)
at defaultResolve (node:internal/modules/esm/resolve:983:11)
at ModuleLoader.defaultResolve (node:internal/modules/esm/loader:801:12)
at #cachedDefaultResolve (node:internal/modules/esm/loader:725:25)
at ModuleLoader.resolve (node:internal/modules/esm/loader:708:38)
at ModuleLoader.getModuleJobForImport (node:internal/modules/esm/loader:309:38)
at #link (node:internal/modules/esm/module_job:202:49)

libraries/nestjs-libraries/src/upload/s3.storage.ts

Same ESLint failure as above: Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@eslint/eslintrc' imported from /eslint.config.mjs

libraries/nestjs-libraries/src/upload/upload.factory.ts

Same ESLint failure as above: Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@eslint/eslintrc' imported from /eslint.config.mjs


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

🪧 Tips

Chat

There are 3 ways to chat with CodeRabbit:

  • Review comments: Directly reply to a review comment made by CodeRabbit. Example:
    • I pushed a fix in commit <commit_id>, please review it.
    • Explain this complex logic.
    • Open a follow-up GitHub issue for this discussion.
  • Files and specific lines of code (under the "Files changed" tab): Tag @coderabbitai in a new review comment at the desired location with your query. Examples:
    • @coderabbitai explain this code block.
    • @coderabbitai modularize this function.
  • PR comments: Tag @coderabbitai in a new PR comment to ask questions about the PR branch. For the best results, please provide a very specific query, as very limited context is provided in this mode. Examples:
    • @coderabbitai gather interesting stats about this repository and render them as a table. Additionally, render a pie chart showing the language distribution in the codebase.
    • @coderabbitai read src/utils.ts and explain its main purpose.
    • @coderabbitai read the files in the src/scheduler package and generate a class diagram using mermaid and a README in the markdown format.
    • @coderabbitai help me debug CodeRabbit configuration file.

Support

Need help? Create a ticket on our support page for assistance with any issues or questions.

Note: Be mindful of the bot's finite context window. It's strongly recommended to break down tasks such as reading entire modules into smaller chunks. For a focused discussion, use review comments to chat about specific files and their changes, instead of using the PR comments.

CodeRabbit Commands (Invoked using PR comments)

  • @coderabbitai pause to pause the reviews on a PR.
  • @coderabbitai resume to resume the paused reviews.
  • @coderabbitai review to trigger an incremental review. This is useful when automatic reviews are disabled for the repository.
  • @coderabbitai full review to do a full review from scratch and review all the files again.
  • @coderabbitai summary to regenerate the summary of the PR.
  • @coderabbitai generate docstrings to generate docstrings for this PR.
  • @coderabbitai generate sequence diagram to generate a sequence diagram of the changes in this PR.
  • @coderabbitai resolve resolve all the CodeRabbit review comments.
  • @coderabbitai configuration to show the current CodeRabbit configuration for the repository.
  • @coderabbitai help to get help.

Other keywords and placeholders

  • Add @coderabbitai ignore anywhere in the PR description to prevent this PR from being reviewed.
  • Add @coderabbitai summary to generate the high-level summary at a specific location in the PR description.
  • Add @coderabbitai anywhere in the PR title to generate the title automatically.

CodeRabbit Configuration File (.coderabbit.yaml)

  • You can programmatically configure CodeRabbit by adding a .coderabbit.yaml file to the root of your repository.
  • Please see the configuration documentation for more information.
  • If your editor has YAML language server enabled, you can add the path at the top of this file to enable auto-completion and validation: # yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json

Documentation and Community

  • Visit our Documentation for detailed information on how to use CodeRabbit.
  • Join our Discord Community to get help, request features, and share feedback.
  • Follow us on X/Twitter for updates and announcements.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (4)
libraries/nestjs-libraries/src/upload/minio.storage.ts (4)

1-7: Same import issues as S3Storage

This file has the same import issues as the S3Storage class. Please apply the fixes mentioned in the S3Storage review.


35-55: Apply the same error-handling improvements as S3Storage

This method has the same issues as the S3Storage implementation. Please apply the error-handling and validation improvements mentioned in the S3Storage review.


57-84: Apply the same error-handling improvements as S3Storage

This method has the same issues as the S3Storage implementation. Please apply the error-handling improvements mentioned in the S3Storage review.


86-95: Apply the same file-path parsing improvements as S3Storage

This method has the same issues as the S3Storage implementation. Please apply the robustness improvements mentioned in the S3Storage review.

🧹 Nitpick comments (1)
libraries/nestjs-libraries/src/upload/minio.storage.ts (1)

9-96: Consider refactoring to reduce code duplication

There's significant code duplication between S3Storage and MinIOStorage classes. Consider extracting common functionality into a base class or shared utilities.

// Create a base class for S3-compatible storage
abstract class S3CompatibleStorage implements IUploadProvider {
  protected _client: S3Client;
  protected _bucketName: string;

  constructor(
    accessKeyId: string,
    secretAccessKey: string,
    region: string,
    bucketName: string,
    endpoint?: string,
    options?: { forcePathStyle?: boolean }
  ) {
    this._bucketName = bucketName;
    this._client = new S3Client({
      endpoint,
      region,
      credentials: { accessKeyId, secretAccessKey },
      ...options,
    });
  }

  protected abstract getUploadUrl(fileName: string): string;

  // Common implementations for uploadSimple, uploadFile, removeFile
  // ...
}

// Then extend for specific implementations
class S3Storage extends S3CompatibleStorage {
  constructor(accessKeyId: string, secretAccessKey: string, region: string, bucketName: string, endpoint?: string) {
    super(accessKeyId, secretAccessKey, region, bucketName, endpoint);
  }

  protected getUploadUrl(fileName: string): string {
    // S3-specific URL logic
  }
}

class MinIOStorage extends S3CompatibleStorage {
  constructor(accessKeyId: string, secretAccessKey: string, region: string, bucketName: string, endpoint: string) {
    super(accessKeyId, secretAccessKey, region, bucketName, endpoint, { forcePathStyle: true });
  }

  protected getUploadUrl(fileName: string): string {
    // MinIO-specific URL logic
  }
}
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between eed8c0d and 52b0d53.

📒 Files selected for processing (4)
  • .env.example (2 hunks)
  • libraries/nestjs-libraries/src/upload/minio.storage.ts (1 hunks)
  • libraries/nestjs-libraries/src/upload/s3.storage.ts (1 hunks)
  • libraries/nestjs-libraries/src/upload/upload.factory.ts (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
.env.example (2)
Learnt from: DrummyFloyd
PR: gitroomhq/postiz-app#550
File: .env.example:92-93
Timestamp: 2025-02-03T09:04:59.568Z
Learning: In `.env.example`, when providing specific provider examples (like Authentik for OIDC), they should be commented out while keeping generic defaults uncommented to better reflect the application's provider-agnostic nature.
Learnt from: DrummyFloyd
PR: gitroomhq/postiz-app#550
File: .env.example:94-98
Timestamp: 2025-02-03T09:06:46.362Z
Learning: The environment variables POSTIZ_OAUTH_* in .env.example use Authentik-specific paths (/application/o/*) as examples, as they represent actual paths used in Authentik integration.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
  • GitHub Check: build-and-publish
🔇 Additional comments (6)
libraries/nestjs-libraries/src/upload/upload.factory.ts (1)

4-5: LGTM: Clean import additions

The imports for the new storage providers are properly added and follow the existing pattern.

.env.example (2)

23-31: LGTM: Well-structured S3 configuration section

The S3 configuration variables are properly documented and commented out, following the established pattern for provider-specific configurations. The comments clearly explain the purpose and usage.


42-42: LGTM: Updated storage provider options

The comment accurately reflects the new storage provider option while maintaining the existing documentation style.

libraries/nestjs-libraries/src/upload/s3.storage.ts (1)

29-37: LGTM: Well-implemented URL generation with proper endpoint handling

The getUploadUrl method correctly handles both custom endpoints and standard AWS S3 URLs, providing good flexibility for different S3-compatible services.

libraries/nestjs-libraries/src/upload/minio.storage.ts (2)

12-28: LGTM: Proper MinIO-specific configuration

The constructor correctly configures the S3 client for MinIO with forcePathStyle: true and makes the endpoint required, which is appropriate for MinIO usage.


30-33: LGTM: Correct MinIO URL format

The URL generation correctly uses path-style addressing which is required for MinIO.
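To make the distinction concrete, here is a small TypeScript sketch (hypothetical helper names, not from the PR) contrasting the two addressing styles: virtual-hosted URLs put the bucket in the hostname, which is the AWS S3 default, while path-style URLs put the bucket after the endpoint host, which MinIO requires.

```typescript
// Illustrative helpers only; the PR implements this logic inside each
// storage class's getUploadUrl method.
function virtualHostedUrl(
  bucket: string,
  region: string,
  key: string,
  endpoint?: string
): string {
  // AWS S3 default: bucket in the hostname; a custom endpoint falls
  // back to path-style addressing.
  return endpoint
    ? `${endpoint}/${bucket}/${key}`
    : `https://${bucket}.s3.${region}.amazonaws.com/${key}`;
}

function pathStyleUrl(endpoint: string, bucket: string, key: string): string {
  // MinIO requires path-style: the bucket follows the endpoint host.
  return `${endpoint}/${bucket}/${key}`;
}

console.log(virtualHostedUrl('media', 'us-east-1', 'abc.png'));
// https://media.s3.us-east-1.amazonaws.com/abc.png
console.log(pathStyleUrl('http://localhost:9000', 'media', 'abc.png'));
// http://localhost:9000/media/abc.png
```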

Comment on lines +23 to +38
      case 's3':
        return new S3Storage(
          process.env.S3_ACCESS_KEY_ID!,
          process.env.S3_SECRET_ACCESS_KEY!,
          process.env.S3_REGION!,
          process.env.S3_BUCKET_NAME!,
          process.env.S3_ENDPOINT
        );
      case 'minio':
        return new MinIOStorage(
          process.env.MINIO_ACCESS_KEY!,
          process.env.MINIO_SECRET_KEY!,
          process.env.MINIO_REGION!,
          process.env.MINIO_BUCKET_NAME!,
          process.env.MINIO_ENDPOINT!
        );

🛠️ Refactor suggestion

Consider validating environment variables and handling inconsistent endpoint requirements

The implementation looks good, but there are a few concerns:

  1. Environment variable validation: The ! assertions assume these variables are always defined, but no validation is performed
  2. Inconsistent endpoint handling: S3_ENDPOINT is optional while MINIO_ENDPOINT is required, which might confuse users

Consider adding validation and making the endpoint handling consistent:

      case 's3':
+        if (!process.env.S3_ACCESS_KEY_ID || !process.env.S3_SECRET_ACCESS_KEY || !process.env.S3_REGION || !process.env.S3_BUCKET_NAME) {
+          throw new Error('Missing required S3 environment variables');
+        }
        return new S3Storage(
          process.env.S3_ACCESS_KEY_ID!,
          process.env.S3_SECRET_ACCESS_KEY!,
          process.env.S3_REGION!,
          process.env.S3_BUCKET_NAME!,
          process.env.S3_ENDPOINT
        );
      case 'minio':
+        if (!process.env.MINIO_ACCESS_KEY || !process.env.MINIO_SECRET_KEY || !process.env.MINIO_REGION || !process.env.MINIO_BUCKET_NAME || !process.env.MINIO_ENDPOINT) {
+          throw new Error('Missing required MinIO environment variables');
+        }
        return new MinIOStorage(
          process.env.MINIO_ACCESS_KEY!,
          process.env.MINIO_SECRET_KEY!,
          process.env.MINIO_REGION!,
          process.env.MINIO_BUCKET_NAME!,
          process.env.MINIO_ENDPOINT!
        );
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/upload.factory.ts around lines 23 to
38, the environment variables are accessed with non-null assertions without
validation, and the endpoint parameters are inconsistently handled between S3
(optional) and MinIO (required). Fix this by adding explicit checks to validate
that all required environment variables are defined before use, throwing clear
errors if any are missing. Also, standardize the endpoint parameter handling by
either making it optional or required for both storage types to avoid confusion.
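One way to implement the validation suggested above is a tiny fail-fast helper; requireEnv is a hypothetical name, not part of this PR:

```typescript
// Hypothetical helper: returns the value of an environment variable,
// or throws a clear error naming the missing key.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch inside the factory: each required key fails loudly if absent.
// return new S3Storage(
//   requireEnv('S3_ACCESS_KEY_ID'),
//   requireEnv('S3_SECRET_ACCESS_KEY'),
//   requireEnv('S3_REGION'),
//   requireEnv('S3_BUCKET_NAME'),
//   process.env.S3_ENDPOINT // optional for AWS S3
// );
```

This also removes the need for the non-null `!` assertions, since requireEnv already narrows the type to string.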

Comment on lines +61 to +88
  async uploadFile(file: Express.Multer.File): Promise<any> {
    const id = makeId(10);
    const extension = mime.extension(file.mimetype) || '';

    const command = new PutObjectCommand({
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: file.buffer,
      ContentType: file.mimetype,
      ACL: 'public-read',
    });

    await this._client.send(command);

    const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
    return {
      filename: `${id}.${extension}`,
      mimetype: file.mimetype,
      size: file.size,
      buffer: file.buffer,
      originalname: `${id}.${extension}`,
      fieldname: 'file',
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
    };
  }

⚠️ Potential issue

Add error handling and improve type safety

Similar issues to the uploadSimple method:

  1. Missing error handling for S3 upload failures
  2. Potentially empty extension from the mime.extension() call

  async uploadFile(file: Express.Multer.File): Promise<any> {
    const id = makeId(10);
-    const extension = mime.extension(file.mimetype) || '';
+    const extension = mime.extension(file.mimetype);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for mimetype: ${file.mimetype}`);
+    }

    const command = new PutObjectCommand({
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: file.buffer,
      ContentType: file.mimetype,
      ACL: 'public-read',
    });

-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });

    const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
    return {
      filename: `${id}.${extension}`,
      mimetype: file.mimetype,
      size: file.size,
      buffer: file.buffer,
      originalname: `${id}.${extension}`,
      fieldname: 'file',
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
    };
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 61 and 88,
the uploadFile method lacks error handling for potential S3 upload failures and
does not handle cases where mime.extension(file.mimetype) returns an empty
string. To fix this, wrap the S3 upload logic in a try-catch block to catch and
handle errors gracefully, and add a fallback or validation to ensure the file
extension is never empty before constructing the S3 object key.

Comment on lines +39 to +59
  async uploadSimple(path: string) {
    const loadImage = await fetch(path);
    const contentType =
      loadImage?.headers?.get('content-type') ||
      loadImage?.headers?.get('Content-Type');
    const extension = getExtension(contentType)!;
    const id = makeId(10);

    const params = {
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: Buffer.from(await loadImage.arrayBuffer()),
      ContentType: contentType,
      ACL: 'public-read',
    };

    const command = new PutObjectCommand({ ...params });
    await this._client.send(command);

    return this.getUploadUrl(`${id}.${extension}`);
  }

⚠️ Potential issue

Improve error handling and content type validation

The method has potential robustness issues:

  1. Unsafe extension extraction: Line 44 uses ! assertion on getExtension which could throw if contentType is invalid
  2. Missing error handling: No error handling for fetch failures or S3 upload failures
  3. ACL verification needed: Using 'public-read' ACL - verify this is intentional for security
  async uploadSimple(path: string) {
-    const loadImage = await fetch(path);
+    const loadImage = await fetch(path).catch(err => {
+      throw new Error(`Failed to fetch file from ${path}: ${err.message}`);
+    });
    const contentType =
      loadImage?.headers?.get('content-type') ||
      loadImage?.headers?.get('Content-Type');
-    const extension = getExtension(contentType)!;
+    const extension = getExtension(contentType);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for content type: ${contentType}`);
+    }
    const id = makeId(10);

    const params = {
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: Buffer.from(await loadImage.arrayBuffer()),
      ContentType: contentType,
      ACL: 'public-read',
    };

    const command = new PutObjectCommand({ ...params });
-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });

    return this.getUploadUrl(`${id}.${extension}`);
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts lines 39 to 59, improve
robustness by first validating that contentType is not null or undefined before
calling getExtension, and handle the case where getExtension returns undefined
to avoid runtime errors. Add try-catch blocks around the fetch call and the S3
upload command to catch and handle possible failures gracefully. Finally, review
the use of 'public-read' ACL to confirm it aligns with security requirements,
and if necessary, make it configurable or document the security implications.

Comment on lines +90 to +99
  async removeFile(filePath: string): Promise<void> {
    const fileName = filePath.split('/').pop();
    if (fileName) {
      const command = new DeleteObjectCommand({
        Bucket: this._bucketName,
        Key: fileName,
      });
      await this._client.send(command);
    }
  }

🛠️ Refactor suggestion

Improve file path parsing robustness

The current implementation of extracting filename from path is fragile and could fail with unexpected path formats.

  async removeFile(filePath: string): Promise<void> {
-    const fileName = filePath.split('/').pop();
+    // Extract filename from URL path, handling both full URLs and relative paths
+    let fileName: string;
+    try {
+      const url = new URL(filePath);
+      fileName = url.pathname.split('/').pop() || '';
+    } catch {
+      // If not a valid URL, treat as relative path
+      fileName = filePath.split('/').pop() || '';
+    }
+    
    if (fileName) {
      const command = new DeleteObjectCommand({
        Bucket: this._bucketName,
        Key: fileName,
      });
-      await this._client.send(command);
+      await this._client.send(command).catch(err => {
+        throw new Error(`Failed to delete file from S3: ${err.message}`);
+      });
+    } else {
+      throw new Error(`Unable to extract filename from path: ${filePath}`);
    }
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts around lines 90 to 99,
the method extracting the filename from the filePath uses a simple split and pop
approach which is fragile. Replace this with a more robust path parsing method,
such as using the built-in path module's basename function, to reliably extract
the filename regardless of path format variations.

Comment on lines +3 to +6
import { makeId } from '@gitroom/nestjs-libraries/services/make.is';
import mime from 'mime-types';
// @ts-ignore
import { getExtension } from 'mime';

💡 Verification agent

🧩 Analysis chain

Address import issues and improve type safety

There are several concerns with the imports:

  1. Suspicious import path: @gitroom/nestjs-libraries/services/make.is - verify this path is correct
  2. @ts-ignore usage: The @ts-ignore comment is a code smell and should be avoided

Let me verify the correct import path for the makeId function:

For the getExtension import, consider using a more type-safe approach:

-// @ts-ignore
-import { getExtension } from 'mime';
+import { extension } from 'mime-types';

Then update the usage accordingly in the methods.


🏁 Script executed:

#!/bin/bash
# Check if the makeId import path is correct
fd -t f "make.is" . | head -10
fd -t f -e ts -e js | xargs grep -l "makeId" | head -10

Length of output: 764


Update MIME extension handling and remove @ts-ignore

The makeId import path (@gitroom/nestjs-libraries/services/make.is) is correct, so no changes are needed there. Instead, please replace the @ts-ignore import from the mime package with the fully typed extension function from mime-types:

• Remove these lines:

- // @ts-ignore
- import { getExtension } from 'mime';

• Add this line (you can coalesce with the existing mime-types import):

+ import { extension } from 'mime-types';

• Then update any getExtension(...)! calls to:

- const ext = getExtension(file.mimetype)!;
+ const ext = extension(file.mimetype)!;

This removes the @ts-ignore, uses a type-safe API, and preserves the intended behavior.

🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 3 and 6,
remove the @ts-ignore import of getExtension from 'mime' and instead import the
extension function from 'mime-types' by merging it with the existing mime
import. Then, update all calls to getExtension(...)! to use extension(...)! to
ensure type safety and remove the need for @ts-ignore.

@egelhaus egelhaus requested a review from Copilot July 17, 2025 14:59
@Copilot Copilot AI left a comment


Pull Request Overview

This PR adds support for S3 and MinIO as storage providers for file uploads, expanding beyond the existing local and Cloudflare options. Users can now configure S3-compatible storage services using the STORAGE_PROVIDER environment variable.

  • Implements S3Storage and MinIOStorage classes with upload, download, and deletion capabilities
  • Updates the UploadFactory to handle 's3' and 'minio' provider types
  • Adds comprehensive environment variable configuration for S3 and MinIO services

Reviewed Changes

Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.

  • libraries/nestjs-libraries/src/upload/upload.factory.ts: Adds factory cases for S3 and MinIO storage providers with environment variable configuration
  • libraries/nestjs-libraries/src/upload/s3.storage.ts: Implements S3Storage class with AWS SDK v3 for S3-compatible services
  • libraries/nestjs-libraries/src/upload/minio.storage.ts: Implements MinIOStorage class specifically configured for MinIO with path-style URLs
  • .env.example: Documents new S3 configuration variables and updates storage provider options
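
The factory dispatch described above might look like this minimal sketch; the provider names come from the PR description, while the real factory constructs the matching storage class from environment variables rather than returning a name:

```typescript
// Minimal sketch, assuming STORAGE_PROVIDER takes one of the values below.
type StorageProvider = 'local' | 'cloudflare' | 's3' | 'minio';

function resolveStorageProvider(value?: string): StorageProvider {
  switch (value) {
    case 's3':
    case 'minio':
    case 'cloudflare':
      return value;
    default:
      return 'local'; // fall back to local storage when unset or unknown
  }
}
```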

Comment on lines +5 to +6
// @ts-ignore
import { getExtension } from 'mime';

Copilot AI Jul 17, 2025


Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.

Suggested change
// @ts-ignore
import { getExtension } from 'mime';
import mime from 'mime-types';

Copilot uses AI. Check for mistakes.

Comment on lines +5 to +6
// @ts-ignore
import { getExtension } from 'mime';

Copilot AI Jul 17, 2025


Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.

Suggested change
// @ts-ignore
import { getExtension } from 'mime';


Key: `${id}.${extension}`,
Body: Buffer.from(await loadImage.arrayBuffer()),
ContentType: contentType,
ACL: 'public-read',

Copilot AI Jul 17, 2025


Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.determineAcl(contentType),

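One possible shape for the suggested helper (determineAcl is hypothetical and not part of the PR): it restricts public access to media types and keeps everything else private.

```typescript
type ObjectAcl = 'public-read' | 'private';

// Hypothetical helper: only images and videos are served publicly; any
// other content type defaults to private.
function determineAcl(contentType: string): ObjectAcl {
  return /^(image|video)\//.test(contentType) ? 'public-read' : 'private';
}
```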

Key: `${id}.${extension}`,
Body: file.buffer,
ContentType: file.mimetype,
ACL: 'public-read',

Copilot AI Jul 17, 2025


Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.getAclForFile(file.mimetype),


Key: `${id}.${extension}`,
Body: Buffer.from(await loadImage.arrayBuffer()),
ContentType: contentType,
ACL: 'public-read',

Copilot AI Jul 17, 2025


Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.determineAcl(contentType),


Key: `${id}.${extension}`,
Body: file.buffer,
ContentType: file.mimetype,
ACL: 'public-read',

Copilot AI Jul 17, 2025


Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: acl || 'private',


path: uploadUrl,
destination: uploadUrl,
encoding: '7bit',
stream: file.buffer as any,

Copilot AI Jul 17, 2025


Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.

Suggested change
stream: file.buffer as any,
stream: Readable.from(file.buffer),

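The suggested replacement relies on Node's built-in stream module; a self-contained sketch (Readable.from special-cases Buffers and emits the whole buffer as a single chunk rather than iterating its bytes):

```typescript
import { Readable } from 'node:stream';

// Wrap a multer-style file buffer in a real Readable stream instead of
// casting it with "as any".
function bufferToStream(buffer: Buffer): Readable {
  return Readable.from(buffer);
}

const stream = bufferToStream(Buffer.from('hello'));
```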

path: uploadUrl,
destination: uploadUrl,
encoding: '7bit',
stream: file.buffer as any,

Copilot AI Jul 17, 2025


Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.

Suggested change
stream: file.buffer as any,
stream: Readable.from(file.buffer),


#S3_SECRET_ACCESS_KEY="your-s3-secret-access-key"
#S3_REGION="us-east-1"
#S3_BUCKET_NAME="your-s3-bucket-name"
#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"

Copilot AI Jul 17, 2025


The S3_UPLOAD_URL environment variable is documented but not used in the S3Storage implementation. Consider removing this unused configuration or implementing its usage.

Suggested change
#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"


@egelhaus
Collaborator

Consider reviewing the comments from @ copilot and @ coderabbitai; they both seem to address the same issues.

@gitroomhq gitroomhq deleted a comment from Azadbangladeshi-com Jul 18, 2025
@solonmedia

Coming to Postiz from Mixpost, I really miss the ability to connect to my S3 services. Hope it can get merged soon.

gabelul pushed a commit to gabelul/postiz-app that referenced this pull request Sep 26, 2025
- Add generic S3-compatible storage provider supporting any S3-compatible service
  - Supports AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, and more
  - Configurable public URLs with support for CDNs and custom domains
  - Signed URL generation for private buckets
  - Path-style and virtual-hosted-style URL support
  - Cloudflare R2 compatibility with checksum header handling

- Add FTP storage provider for traditional file transfer
  - Supports both FTP and FTPS (FTP over SSL/TLS)
  - Configurable passive/active modes
  - Connection pooling and timeout handling
  - Date-based directory organization

- Add SFTP storage provider for secure file transfer
  - SSH key and password authentication support
  - Connection keepalive and timeout management
  - Secure file transfer over SSH

- Update upload factory with comprehensive error handling
  - Detailed environment variable validation
  - Clear error messages for missing configuration
  - Support for all storage providers: local, cloudflare, s3-compatible, ftp, sftp

- Comprehensive environment configuration documentation
  - Detailed .env.example with examples for all providers
  - Clear separation between upload credentials and public URLs
  - Provider-specific configuration options

Closes gitroomhq#322
Supersedes gitroomhq#873
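
Path-style vs virtual-hosted-style URLs are the main MinIO-specific wrinkle mentioned above. A plain config object of the shape @aws-sdk/client-s3's S3Client accepts (the MINIO_* env-variable names and default endpoint are assumptions, not taken from the PR):

```typescript
// Assumed env names; MinIO serves buckets at http://host:9000/bucket/key
// (path-style), so forcePathStyle must be enabled.
const minioClientConfig = {
  region: process.env.MINIO_REGION ?? 'us-east-1',
  endpoint: process.env.MINIO_ENDPOINT ?? 'http://localhost:9000',
  forcePathStyle: true,
  credentials: {
    accessKeyId: process.env.MINIO_ACCESS_KEY ?? '',
    secretAccessKey: process.env.MINIO_SECRET_KEY ?? '',
  },
};
```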