11 changes: 10 additions & 1 deletion .env.example
@@ -20,6 +20,15 @@ CLOUDFLARE_BUCKETNAME="your-bucket-name"
CLOUDFLARE_BUCKET_URL="https://your-bucket-url.r2.cloudflarestorage.com/"
CLOUDFLARE_REGION="auto"

## S3 Configuration (for AWS S3 or S3-compatible services like MinIO)
## Set STORAGE_PROVIDER="s3" to use S3 storage
#S3_ACCESS_KEY_ID="your-s3-access-key-id"
#S3_SECRET_ACCESS_KEY="your-s3-secret-access-key"
#S3_REGION="us-east-1"
#S3_BUCKET_NAME="your-s3-bucket-name"
#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"
Copilot AI Jul 17, 2025

The S3_UPLOAD_URL environment variable is documented but not used in the S3Storage implementation. Consider removing this unused configuration or implementing its usage.

Suggested change
#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"
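If the variable were kept instead, one way to honor it is a base-URL override when building the public file URL. A minimal sketch, not part of the PR; buildUploadUrl and its uploadUrlBase parameter are invented here for illustration:

// Sketch: prefer an explicit base URL (e.g. process.env.S3_UPLOAD_URL)
// over the URL derived from bucket and region.
function buildUploadUrl(
  fileName: string,
  bucketName: string,
  region: string,
  uploadUrlBase?: string
): string {
  if (uploadUrlBase) {
    return `${uploadUrlBase.replace(/\/+$/, '')}/${fileName}`;
  }
  return `https://${bucketName}.s3.${region}.amazonaws.com/${fileName}`;
}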

#S3_ENDPOINT="" # Optional: Custom endpoint for S3-compatible services like MinIO

# === Common optional Settings

## This is a dummy key, you must create your own from Resend.
@@ -30,7 +39,7 @@ CLOUDFLARE_REGION="auto"
#EMAIL_FROM_NAME=""
#DISABLE_REGISTRATION=false

# Where will social media icons be saved - local or cloudflare.
# Where will social media icons be saved - local, cloudflare, or s3.
STORAGE_PROVIDER="local"

# Your upload directory path if you host your files locally, otherwise Cloudflare will be used.
99 changes: 99 additions & 0 deletions libraries/nestjs-libraries/src/upload/minio.storage.ts
@@ -0,0 +1,99 @@
import { S3Client, PutObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import 'multer';
import { makeId } from '@gitroom/nestjs-libraries/services/make.is';
import mime from 'mime-types';
// @ts-ignore
import { getExtension } from 'mime';
Comment on lines +5 to +6
Copilot AI Jul 17, 2025

Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.

Suggested change
// @ts-ignore
import { getExtension } from 'mime';
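A type-safe alternative (the same fix CodeRabbit proposes for s3.storage.ts further down) is the typed extension helper from mime-types, which this file already depends on; a sketch:

import { extension } from 'mime-types'; // typed as (type: string) => string | false

const ext = extension('image/png'); // 'png'
if (!ext) {
  throw new Error('Could not map content type to a file extension');
}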

import { IUploadProvider } from './upload.interface';

class MinIOStorage implements IUploadProvider {
private _client: S3Client;

constructor(
private _accessKeyId: string,
private _secretAccessKey: string,
private _region: string,
private _bucketName: string,
private _endpoint: string
) {
this._client = new S3Client({
endpoint: _endpoint,
region: _region,
credentials: {
accessKeyId: _accessKeyId,
secretAccessKey: _secretAccessKey,
},
forcePathStyle: true, // Required for MinIO
});
}

private getUploadUrl(fileName: string): string {
// For MinIO with path-style, the URL format is: endpoint/bucket/file
return `${this._endpoint}/${this._bucketName}/${fileName}`;
}

async uploadSimple(path: string) {
const loadImage = await fetch(path);
const contentType =
loadImage?.headers?.get('content-type') ||
loadImage?.headers?.get('Content-Type');
const extension = getExtension(contentType)!;
const id = makeId(10);

const params = {
Bucket: this._bucketName,
Key: `${id}.${extension}`,
Body: Buffer.from(await loadImage.arrayBuffer()),
ContentType: contentType,
ACL: 'public-read',
Copilot AI Jul 17, 2025

Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.determineAcl(contentType),
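determineAcl does not exist anywhere in the PR; the name is invented by the suggestion. A minimal sketch of the idea, assuming image uploads should stay public and everything else defaults to private:

import type { ObjectCannedACL } from '@aws-sdk/client-s3';

// Hypothetical helper: public-read for images, private for everything else.
function determineAcl(contentType: string | null): ObjectCannedACL {
  return contentType?.startsWith('image/') ? 'public-read' : 'private';
}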

};

const command = new PutObjectCommand({ ...params });
await this._client.send(command);

return this.getUploadUrl(`${id}.${extension}`);
}

async uploadFile(file: Express.Multer.File): Promise<any> {
const id = makeId(10);
const extension = mime.extension(file.mimetype) || '';

const command = new PutObjectCommand({
Bucket: this._bucketName,
Key: `${id}.${extension}`,
Body: file.buffer,
ContentType: file.mimetype,
ACL: 'public-read',
Copilot AI Jul 17, 2025

Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: acl || 'private',

});

await this._client.send(command);

const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
return {
filename: `${id}.${extension}`,
mimetype: file.mimetype,
size: file.size,
buffer: file.buffer,
originalname: `${id}.${extension}`,
fieldname: 'file',
path: uploadUrl,
destination: uploadUrl,
encoding: '7bit',
stream: file.buffer as any,
Copilot AI Jul 17, 2025

Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.

Suggested change
stream: file.buffer as any,
stream: Readable.from(file.buffer),
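Note that Readable.from also needs an import this file does not have yet. A self-contained sketch of the idea:

import { Readable } from 'node:stream';

// Wrap the in-memory buffer as a real stream instead of casting with `as any`.
// Node emits a Buffer passed to Readable.from as a single chunk.
function bufferToStream(buffer: Buffer): Readable {
  return Readable.from(buffer);
}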

};
}

async removeFile(filePath: string): Promise<void> {
const fileName = filePath.split('/').pop();
if (fileName) {
const command = new DeleteObjectCommand({
Bucket: this._bucketName,
Key: fileName,
});
await this._client.send(command);
}
}
}

export { MinIOStorage };
export default MinIOStorage;
103 changes: 103 additions & 0 deletions libraries/nestjs-libraries/src/upload/s3.storage.ts
@@ -0,0 +1,103 @@
import { S3Client, PutObjectCommand, DeleteObjectCommand } from '@aws-sdk/client-s3';
import 'multer';
import { makeId } from '@gitroom/nestjs-libraries/services/make.is';
import mime from 'mime-types';
// @ts-ignore
import { getExtension } from 'mime';
Comment on lines +3 to +6

💡 Verification agent

🧩 Analysis chain

Address import issues and improve type safety

There are several concerns with the imports:

  1. Suspicious import path: @gitroom/nestjs-libraries/services/make.is - verify this path is correct
  2. @ts-ignore usage: The @ts-ignore comment is a code smell and should be avoided

Let me verify the correct import path for the makeId function:

For the getExtension import, consider using a more type-safe approach:

-// @ts-ignore
-import { getExtension } from 'mime';
+import { extension } from 'mime-types';

Then update the usage accordingly in the methods.


🏁 Script executed:

#!/bin/bash
# Check if the makeId import path is correct
fd -t f "make.is" . | head -10
fd -t f -e ts -e js | xargs grep -l "makeId" | head -10

Length of output: 764


Update MIME extension handling and remove @ts-ignore

The makeId import path (@gitroom/nestjs-libraries/services/make.is) is correct, so no changes are needed there. Instead, please replace the @ts-ignore import from the mime package with the fully typed extension function from mime-types:

• Remove these lines:

- // @ts-ignore
- import { getExtension } from 'mime';

• Add this line (you can coalesce with the existing mime-types import):

+ import { extension } from 'mime-types';

• Then update any getExtension(...)! calls to:

- const ext = getExtension(file.mimetype)!;
+ const ext = extension(file.mimetype)!;

This removes the @ts-ignore, uses a type-safe API, and preserves the intended behavior.

🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 3 and 6,
remove the @ts-ignore import of getExtension from 'mime' and instead import the
extension function from 'mime-types' by merging it with the existing mime
import. Then, update all calls to getExtension(...)! to use extension(...)! to
ensure type safety and remove the need for @ts-ignore.

Comment on lines +5 to +6
Copilot AI Jul 17, 2025

Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.

Suggested change
// @ts-ignore
import { getExtension } from 'mime';
import mime from 'mime-types';

import { IUploadProvider } from './upload.interface';

class S3Storage implements IUploadProvider {
private _client: S3Client;

constructor(
private _accessKeyId: string,
private _secretAccessKey: string,
private _region: string,
private _bucketName: string,
private _endpoint?: string
) {
this._client = new S3Client({
endpoint: _endpoint, // Optional custom endpoint for S3-compatible services like MinIO
region: _region,
credentials: {
accessKeyId: _accessKeyId,
secretAccessKey: _secretAccessKey,
},
});
}

private getUploadUrl(fileName: string): string {
if (this._endpoint) {
// For custom S3-compatible endpoints (like MinIO), use endpoint/bucket/file
return `${this._endpoint}/${this._bucketName}/${fileName}`;
} else {
// For standard AWS S3, use bucket.s3.region.amazonaws.com/file
return `https://${this._bucketName}.s3.${this._region}.amazonaws.com/${fileName}`;
}
}

async uploadSimple(path: string) {
const loadImage = await fetch(path);
const contentType =
loadImage?.headers?.get('content-type') ||
loadImage?.headers?.get('Content-Type');
const extension = getExtension(contentType)!;
const id = makeId(10);

const params = {
Bucket: this._bucketName,
Key: `${id}.${extension}`,
Body: Buffer.from(await loadImage.arrayBuffer()),
ContentType: contentType,
ACL: 'public-read',
Copilot AI Jul 17, 2025

Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.determineAcl(contentType),

};

const command = new PutObjectCommand({ ...params });
await this._client.send(command);

return this.getUploadUrl(`${id}.${extension}`);
}
Comment on lines +39 to +59

⚠️ Potential issue

Improve error handling and content type validation

The method has potential robustness issues:

  1. Unsafe extension extraction: Line 44 uses a ! assertion on getExtension, which masks an undefined result and yields a malformed key when contentType is missing or unrecognized
  2. Missing error handling: No error handling for fetch failures or S3 upload failures
  3. ACL verification needed: Using 'public-read' ACL - verify this is intentional for security
  async uploadSimple(path: string) {
-    const loadImage = await fetch(path);
+    const loadImage = await fetch(path).catch(err => {
+      throw new Error(`Failed to fetch file from ${path}: ${err.message}`);
+    });
    const contentType =
      loadImage?.headers?.get('content-type') ||
      loadImage?.headers?.get('Content-Type');
-    const extension = getExtension(contentType)!;
+    const extension = getExtension(contentType);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for content type: ${contentType}`);
+    }
    const id = makeId(10);

    const params = {
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: Buffer.from(await loadImage.arrayBuffer()),
      ContentType: contentType,
      ACL: 'public-read',
    };

    const command = new PutObjectCommand({ ...params });
-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });

    return this.getUploadUrl(`${id}.${extension}`);
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts lines 39 to 59, improve
robustness by first validating that contentType is not null or undefined before
calling getExtension, and handle the case where getExtension returns undefined
to avoid runtime errors. Add try-catch blocks around the fetch call and the S3
upload command to catch and handle possible failures gracefully. Finally, review
the use of 'public-read' ACL to confirm it aligns with security requirements,
and if necessary, make it configurable or document the security implications.


async uploadFile(file: Express.Multer.File): Promise<any> {
const id = makeId(10);
const extension = mime.extension(file.mimetype) || '';

const command = new PutObjectCommand({
Bucket: this._bucketName,
Key: `${id}.${extension}`,
Body: file.buffer,
ContentType: file.mimetype,
ACL: 'public-read',
Copilot AI Jul 17, 2025

Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.

Suggested change
ACL: 'public-read',
ACL: this.getAclForFile(file.mimetype),

});

await this._client.send(command);

const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
return {
filename: `${id}.${extension}`,
mimetype: file.mimetype,
size: file.size,
buffer: file.buffer,
originalname: `${id}.${extension}`,
fieldname: 'file',
path: uploadUrl,
destination: uploadUrl,
encoding: '7bit',
stream: file.buffer as any,
Copilot AI Jul 17, 2025

Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.

Suggested change
stream: file.buffer as any,
stream: Readable.from(file.buffer),

};
}
Comment on lines +61 to +88

⚠️ Potential issue

Add error handling and improve type safety

Similar issues as uploadSimple method:

  1. Missing error handling for S3 upload failures
  2. Potential empty extension from mime.extension() call
  async uploadFile(file: Express.Multer.File): Promise<any> {
    const id = makeId(10);
-    const extension = mime.extension(file.mimetype) || '';
+    const extension = mime.extension(file.mimetype);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for mimetype: ${file.mimetype}`);
+    }

    const command = new PutObjectCommand({
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: file.buffer,
      ContentType: file.mimetype,
      ACL: 'public-read',
    });

-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });

    const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
    return {
      filename: `${id}.${extension}`,
      mimetype: file.mimetype,
      size: file.size,
      buffer: file.buffer,
      originalname: `${id}.${extension}`,
      fieldname: 'file',
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
    };
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 61 and 88,
the uploadFile method lacks error handling for potential S3 upload failures and
does not handle cases where mime.extension(file.mimetype) returns an empty
string. To fix this, wrap the S3 upload logic in a try-catch block to catch and
handle errors gracefully, and add a fallback or validation to ensure the file
extension is never empty before constructing the S3 object key.


async removeFile(filePath: string): Promise<void> {
const fileName = filePath.split('/').pop();
if (fileName) {
const command = new DeleteObjectCommand({
Bucket: this._bucketName,
Key: fileName,
});
await this._client.send(command);
}
}
Comment on lines +90 to +99

🛠️ Refactor suggestion

Improve file path parsing robustness

The current implementation of extracting filename from path is fragile and could fail with unexpected path formats.

  async removeFile(filePath: string): Promise<void> {
-    const fileName = filePath.split('/').pop();
+    // Extract filename from URL path, handling both full URLs and relative paths
+    let fileName: string;
+    try {
+      const url = new URL(filePath);
+      fileName = url.pathname.split('/').pop() || '';
+    } catch {
+      // If not a valid URL, treat as relative path
+      fileName = filePath.split('/').pop() || '';
+    }
+    
    if (fileName) {
      const command = new DeleteObjectCommand({
        Bucket: this._bucketName,
        Key: fileName,
      });
-      await this._client.send(command);
+      await this._client.send(command).catch(err => {
+        throw new Error(`Failed to delete file from S3: ${err.message}`);
+      });
+    } else {
+      throw new Error(`Unable to extract filename from path: ${filePath}`);
    }
  }
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts around lines 90 to 99,
the method extracting the filename from the filePath uses a simple split and pop
approach which is fragile. Replace this with a more robust path parsing method,
such as using the built-in path module's basename function, to reliably extract
the filename regardless of path format variations.
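For reference, the path-module approach the prompt mentions is a one-liner. A sketch, with the caveat that it does not strip query strings, which the URL-based suggestion above does handle:

import { posix } from 'node:path';

// Sketch: basename extracts the last path segment from a full URL path
// or a bare key alike. Query strings are not stripped here.
function extractFileName(filePath: string): string {
  return posix.basename(filePath);
}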

}

export { S3Storage };
export default S3Storage;
18 changes: 18 additions & 0 deletions libraries/nestjs-libraries/src/upload/upload.factory.ts
@@ -1,6 +1,8 @@
import { CloudflareStorage } from './cloudflare.storage';
import { IUploadProvider } from './upload.interface';
import { LocalStorage } from './local.storage';
import { S3Storage } from './s3.storage';
import { MinIOStorage } from './minio.storage';

export class UploadFactory {
static createStorage(): IUploadProvider {
@@ -18,6 +20,22 @@ export class UploadFactory {
process.env.CLOUDFLARE_BUCKETNAME!,
process.env.CLOUDFLARE_BUCKET_URL!
);
case 's3':
return new S3Storage(
process.env.S3_ACCESS_KEY_ID!,
process.env.S3_SECRET_ACCESS_KEY!,
process.env.S3_REGION!,
process.env.S3_BUCKET_NAME!,
process.env.S3_ENDPOINT
);
case 'minio':
return new MinIOStorage(
process.env.MINIO_ACCESS_KEY!,
process.env.MINIO_SECRET_KEY!,
process.env.MINIO_REGION!,
process.env.MINIO_BUCKET_NAME!,
process.env.MINIO_ENDPOINT!
);
Comment on lines +23 to +38

🛠️ Refactor suggestion

Consider validating environment variables and handling inconsistent endpoint requirements

The implementation looks good, but there are a few concerns:

  1. Environment variable validation: The ! assertions assume these variables are always defined, but no validation is performed
  2. Inconsistent endpoint handling: S3_ENDPOINT is optional while MINIO_ENDPOINT is required, which might confuse users

Consider adding validation and making the endpoint handling consistent:

      case 's3':
+        if (!process.env.S3_ACCESS_KEY_ID || !process.env.S3_SECRET_ACCESS_KEY || !process.env.S3_REGION || !process.env.S3_BUCKET_NAME) {
+          throw new Error('Missing required S3 environment variables');
+        }
        return new S3Storage(
          process.env.S3_ACCESS_KEY_ID!,
          process.env.S3_SECRET_ACCESS_KEY!,
          process.env.S3_REGION!,
          process.env.S3_BUCKET_NAME!,
          process.env.S3_ENDPOINT
        );
      case 'minio':
+        if (!process.env.MINIO_ACCESS_KEY || !process.env.MINIO_SECRET_KEY || !process.env.MINIO_REGION || !process.env.MINIO_BUCKET_NAME || !process.env.MINIO_ENDPOINT) {
+          throw new Error('Missing required MinIO environment variables');
+        }
        return new MinIOStorage(
          process.env.MINIO_ACCESS_KEY!,
          process.env.MINIO_SECRET_KEY!,
          process.env.MINIO_REGION!,
          process.env.MINIO_BUCKET_NAME!,
          process.env.MINIO_ENDPOINT!
        );
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/upload.factory.ts around lines 23 to
38, the environment variables are accessed with non-null assertions without
validation, and the endpoint parameters are inconsistently handled between S3
(optional) and MinIO (required). Fix this by adding explicit checks to validate
that all required environment variables are defined before use, throwing clear
errors if any are missing. Also, standardize the endpoint parameter handling by
either making it optional or required for both storage types to avoid confusion.
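A small helper would keep these checks uniform across providers. Hypothetical, not part of the PR:

// Hypothetical helper: fail fast and name the missing variable instead of
// raising a generic error later in the provider constructor.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage sketch inside the factory:
//   return new S3Storage(
//     requireEnv('S3_ACCESS_KEY_ID'),
//     requireEnv('S3_SECRET_ACCESS_KEY'),
//     requireEnv('S3_REGION'),
//     requireEnv('S3_BUCKET_NAME'),
//     process.env.S3_ENDPOINT
//   );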

default:
throw new Error(`Invalid storage type ${storageProvider}`);
}