feat: add support for s3 and minio as storage providers #873
base: main
Conversation
@muqsitnawaz is attempting to deploy a commit to the Listinai Team on Vercel. A member of the Team first needs to authorize it.
Walkthrough

Support for S3-compatible storage providers has been added, including AWS S3 and MinIO. New classes for S3 and MinIO storage were introduced, and the upload factory was updated to instantiate these providers based on environment variables.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Client
    participant UploadFactory
    participant S3Storage
    participant MinIOStorage
    Client->>UploadFactory: createStorage("s3" or "minio")
    alt "s3" selected
        UploadFactory->>S3Storage: new S3Storage(env vars)
        UploadFactory-->>Client: S3Storage instance
    else "minio" selected
        UploadFactory->>MinIOStorage: new MinIOStorage(env vars)
        UploadFactory-->>Client: MinIOStorage instance
    end
    Client->>S3Storage: uploadFile(file) / uploadSimple(url) / removeFile(path)
    Client->>MinIOStorage: uploadFile(file) / uploadSimple(url) / removeFile(path)
```
Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it's a critical failure.

🔧 ESLint

The same error was reported for `libraries/nestjs-libraries/src/upload/minio.storage.ts`, `libraries/nestjs-libraries/src/upload/s3.storage.ts`, and `libraries/nestjs-libraries/src/upload/upload.factory.ts`:

```
ESLint: 8.57.0
Error [ERR_MODULE_NOT_FOUND]: Cannot find package '@eslint/eslintrc' imported from /eslint.config.mjs
```
Actionable comments posted: 5
♻️ Duplicate comments (4)

libraries/nestjs-libraries/src/upload/minio.storage.ts (4)

- Lines 1-7: Same import issues as S3Storage. This file has the same import issues as the S3Storage class. Please apply the fixes mentioned in the S3Storage review.
- Lines 35-55: Apply the same error handling improvements as S3Storage. This method has the same issues as the S3Storage implementation. Please apply the same error handling and validation improvements mentioned in the S3Storage review.
- Lines 57-84: Apply the same error handling improvements as S3Storage. This method has the same issues as the S3Storage implementation. Please apply the same error handling and validation improvements mentioned in the S3Storage review.
- Lines 86-95: Apply the same file path parsing improvements as S3Storage. This method has the same issues as the S3Storage implementation. Please apply the same robustness improvements mentioned in the S3Storage review.
🧹 Nitpick comments (1)

libraries/nestjs-libraries/src/upload/minio.storage.ts (1)

- Lines 9-96: Consider refactoring to reduce code duplication. There's significant code duplication between the `S3Storage` and `MinIOStorage` classes. Consider extracting common functionality into a base class or shared utilities.

```typescript
// Create a base class for S3-compatible storage
abstract class S3CompatibleStorage implements IUploadProvider {
  protected _client: S3Client;
  protected _bucketName: string;

  constructor(
    accessKeyId: string,
    secretAccessKey: string,
    region: string,
    bucketName: string,
    endpoint?: string,
    options?: { forcePathStyle?: boolean }
  ) {
    this._bucketName = bucketName;
    this._client = new S3Client({
      endpoint,
      region,
      credentials: { accessKeyId, secretAccessKey },
      ...options,
    });
  }

  protected abstract getUploadUrl(fileName: string): string;

  // Common implementations for uploadSimple, uploadFile, removeFile
  // ...
}

// Then extend for specific implementations
class S3Storage extends S3CompatibleStorage {
  constructor(accessKeyId: string, secretAccessKey: string, region: string, bucketName: string, endpoint?: string) {
    super(accessKeyId, secretAccessKey, region, bucketName, endpoint);
  }

  protected getUploadUrl(fileName: string): string {
    // S3-specific URL logic
  }
}

class MinIOStorage extends S3CompatibleStorage {
  constructor(accessKeyId: string, secretAccessKey: string, region: string, bucketName: string, endpoint: string) {
    super(accessKeyId, secretAccessKey, region, bucketName, endpoint, { forcePathStyle: true });
  }

  protected getUploadUrl(fileName: string): string {
    // MinIO-specific URL logic
  }
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- `.env.example` (2 hunks)
- `libraries/nestjs-libraries/src/upload/minio.storage.ts` (1 hunks)
- `libraries/nestjs-libraries/src/upload/s3.storage.ts` (1 hunks)
- `libraries/nestjs-libraries/src/upload/upload.factory.ts` (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
.env.example (2)
Learnt from: DrummyFloyd
PR: gitroomhq/postiz-app#550
File: .env.example:92-93
Timestamp: 2025-02-03T09:04:59.568Z
Learning: In `.env.example`, when providing specific provider examples (like Authentik for OIDC), they should be commented out while keeping generic defaults uncommented to better reflect the application's provider-agnostic nature.
Learnt from: DrummyFloyd
PR: gitroomhq/postiz-app#550
File: .env.example:94-98
Timestamp: 2025-02-03T09:06:46.362Z
Learning: The environment variables POSTIZ_OAUTH_* in .env.example use Authentik-specific paths (/application/o/*) as examples, as they represent actual paths used in Authentik integration.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build-and-publish
🔇 Additional comments (6)
libraries/nestjs-libraries/src/upload/upload.factory.ts (1)

- Lines 4-5: LGTM: Clean import additions. The imports for the new storage providers are properly added and follow the existing pattern.

.env.example (2)

- Lines 23-31: LGTM: Well-structured S3 configuration section. The S3 configuration variables are properly documented and commented out, following the established pattern for provider-specific configurations. The comments clearly explain the purpose and usage.
- Line 42: LGTM: Updated storage provider options. The comment accurately reflects the new storage provider option while maintaining the existing documentation style.

libraries/nestjs-libraries/src/upload/s3.storage.ts (1)

- Lines 29-37: LGTM: Well-implemented URL generation with proper endpoint handling. The `getUploadUrl` method correctly handles both custom endpoints and standard AWS S3 URLs, providing good flexibility for different S3-compatible services.

libraries/nestjs-libraries/src/upload/minio.storage.ts (2)

- Lines 12-28: LGTM: Proper MinIO-specific configuration. The constructor correctly configures the S3 client for MinIO with `forcePathStyle: true` and makes the endpoint required, which is appropriate for MinIO usage.
- Lines 30-33: LGTM: Correct MinIO URL format. The URL generation correctly uses path-style addressing, which is required for MinIO.
```typescript
      case 's3':
        return new S3Storage(
          process.env.S3_ACCESS_KEY_ID!,
          process.env.S3_SECRET_ACCESS_KEY!,
          process.env.S3_REGION!,
          process.env.S3_BUCKET_NAME!,
          process.env.S3_ENDPOINT
        );
      case 'minio':
        return new MinIOStorage(
          process.env.MINIO_ACCESS_KEY!,
          process.env.MINIO_SECRET_KEY!,
          process.env.MINIO_REGION!,
          process.env.MINIO_BUCKET_NAME!,
          process.env.MINIO_ENDPOINT!
        );
```
🛠️ Refactor suggestion
Consider validating environment variables and handling inconsistent endpoint requirements

The implementation looks good, but there are a few concerns:

- Environment variable validation: the `!` assertions assume these variables are always defined, but no validation is performed.
- Inconsistent endpoint handling: `S3_ENDPOINT` is optional while `MINIO_ENDPOINT` is required, which might confuse users.
Consider adding validation and making the endpoint handling consistent:
```diff
     case 's3':
+      if (!process.env.S3_ACCESS_KEY_ID || !process.env.S3_SECRET_ACCESS_KEY || !process.env.S3_REGION || !process.env.S3_BUCKET_NAME) {
+        throw new Error('Missing required S3 environment variables');
+      }
       return new S3Storage(
         process.env.S3_ACCESS_KEY_ID!,
         process.env.S3_SECRET_ACCESS_KEY!,
         process.env.S3_REGION!,
         process.env.S3_BUCKET_NAME!,
         process.env.S3_ENDPOINT
       );
     case 'minio':
+      if (!process.env.MINIO_ACCESS_KEY || !process.env.MINIO_SECRET_KEY || !process.env.MINIO_REGION || !process.env.MINIO_BUCKET_NAME || !process.env.MINIO_ENDPOINT) {
+        throw new Error('Missing required MinIO environment variables');
+      }
       return new MinIOStorage(
         process.env.MINIO_ACCESS_KEY!,
         process.env.MINIO_SECRET_KEY!,
         process.env.MINIO_REGION!,
         process.env.MINIO_BUCKET_NAME!,
         process.env.MINIO_ENDPOINT!
       );
```
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/upload.factory.ts around lines 23 to
38, the environment variables are accessed with non-null assertions without
validation, and the endpoint parameters are inconsistently handled between S3
(optional) and MinIO (required). Fix this by adding explicit checks to validate
that all required environment variables are defined before use, throwing clear
errors if any are missing. Also, standardize the endpoint parameter handling by
either making it optional or required for both storage types to avoid confusion.
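The validation the review asks for can be expressed compactly with a small helper. This is a hypothetical sketch, not code from the PR; `requireEnv` is an illustrative name:

```typescript
// Hypothetical helper (not part of this PR): fail fast with a clear message
// when a required environment variable is missing, instead of relying on
// non-null (!) assertions that hide the problem until the SDK call fails.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Sketch of how the factory's 's3' case could use it:
// case 's3':
//   return new S3Storage(
//     requireEnv('S3_ACCESS_KEY_ID'),
//     requireEnv('S3_SECRET_ACCESS_KEY'),
//     requireEnv('S3_REGION'),
//     requireEnv('S3_BUCKET_NAME'),
//     process.env.S3_ENDPOINT // intentionally optional for AWS S3
//   );
```

This keeps each `case` short and produces an error naming the exact variable that is missing.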
```typescript
  async uploadFile(file: Express.Multer.File): Promise<any> {
    const id = makeId(10);
    const extension = mime.extension(file.mimetype) || '';

    const command = new PutObjectCommand({
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: file.buffer,
      ContentType: file.mimetype,
      ACL: 'public-read',
    });

    await this._client.send(command);

    const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
    return {
      filename: `${id}.${extension}`,
      mimetype: file.mimetype,
      size: file.size,
      buffer: file.buffer,
      originalname: `${id}.${extension}`,
      fieldname: 'file',
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
    };
  }
```
Add error handling and improve type safety

This method has similar issues to the `uploadSimple` method:

- Missing error handling for S3 upload failures
- Potential empty extension from the `mime.extension()` call
```diff
   async uploadFile(file: Express.Multer.File): Promise<any> {
     const id = makeId(10);
-    const extension = mime.extension(file.mimetype) || '';
+    const extension = mime.extension(file.mimetype);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for mimetype: ${file.mimetype}`);
+    }
     const command = new PutObjectCommand({
       Bucket: this._bucketName,
       Key: `${id}.${extension}`,
       Body: file.buffer,
       ContentType: file.mimetype,
       ACL: 'public-read',
     });
-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });
     const uploadUrl = this.getUploadUrl(`${id}.${extension}`);
     return {
       filename: `${id}.${extension}`,
       mimetype: file.mimetype,
       size: file.size,
       buffer: file.buffer,
       originalname: `${id}.${extension}`,
       fieldname: 'file',
       path: uploadUrl,
       destination: uploadUrl,
       encoding: '7bit',
       stream: file.buffer as any,
     };
   }
```
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 61 and 88,
the uploadFile method lacks error handling for potential S3 upload failures and
does not handle cases where mime.extension(file.mimetype) returns an empty
string. To fix this, wrap the S3 upload logic in a try-catch block to catch and
handle errors gracefully, and add a fallback or validation to ensure the file
extension is never empty before constructing the S3 object key.
```typescript
  async uploadSimple(path: string) {
    const loadImage = await fetch(path);
    const contentType =
      loadImage?.headers?.get('content-type') ||
      loadImage?.headers?.get('Content-Type');
    const extension = getExtension(contentType)!;
    const id = makeId(10);

    const params = {
      Bucket: this._bucketName,
      Key: `${id}.${extension}`,
      Body: Buffer.from(await loadImage.arrayBuffer()),
      ContentType: contentType,
      ACL: 'public-read',
    };

    const command = new PutObjectCommand({ ...params });
    await this._client.send(command);

    return this.getUploadUrl(`${id}.${extension}`);
  }
```
Improve error handling and content type validation

The method has potential robustness issues:

- Unsafe extension extraction: line 44 uses a `!` assertion on `getExtension`, which could throw if `contentType` is invalid.
- Missing error handling: there is no error handling for fetch failures or S3 upload failures.
- ACL verification needed: the code uses the `'public-read'` ACL; verify this is intentional for security.
```diff
   async uploadSimple(path: string) {
-    const loadImage = await fetch(path);
+    const loadImage = await fetch(path).catch(err => {
+      throw new Error(`Failed to fetch file from ${path}: ${err.message}`);
+    });
     const contentType =
       loadImage?.headers?.get('content-type') ||
       loadImage?.headers?.get('Content-Type');
-    const extension = getExtension(contentType)!;
+    const extension = getExtension(contentType);
+    if (!extension) {
+      throw new Error(`Unable to determine file extension for content type: ${contentType}`);
+    }
     const id = makeId(10);
     const params = {
       Bucket: this._bucketName,
       Key: `${id}.${extension}`,
       Body: Buffer.from(await loadImage.arrayBuffer()),
       ContentType: contentType,
       ACL: 'public-read',
     };
     const command = new PutObjectCommand({ ...params });
-    await this._client.send(command);
+    await this._client.send(command).catch(err => {
+      throw new Error(`Failed to upload file to S3: ${err.message}`);
+    });
     return this.getUploadUrl(`${id}.${extension}`);
   }
```
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts lines 39 to 59, improve
robustness by first validating that contentType is not null or undefined before
calling getExtension, and handle the case where getExtension returns undefined
to avoid runtime errors. Add try-catch blocks around the fetch call and the S3
upload command to catch and handle possible failures gracefully. Finally, review
the use of 'public-read' ACL to confirm it aligns with security requirements,
and if necessary, make it configurable or document the security implications.
```typescript
  async removeFile(filePath: string): Promise<void> {
    const fileName = filePath.split('/').pop();
    if (fileName) {
      const command = new DeleteObjectCommand({
        Bucket: this._bucketName,
        Key: fileName,
      });
      await this._client.send(command);
    }
  }
```
🛠️ Refactor suggestion
Improve file path parsing robustness
The current implementation of extracting filename from path is fragile and could fail with unexpected path formats.
```diff
   async removeFile(filePath: string): Promise<void> {
-    const fileName = filePath.split('/').pop();
+    // Extract filename from URL path, handling both full URLs and relative paths
+    let fileName: string;
+    try {
+      const url = new URL(filePath);
+      fileName = url.pathname.split('/').pop() || '';
+    } catch {
+      // If not a valid URL, treat as relative path
+      fileName = filePath.split('/').pop() || '';
+    }
+
     if (fileName) {
       const command = new DeleteObjectCommand({
         Bucket: this._bucketName,
         Key: fileName,
       });
-      await this._client.send(command);
+      await this._client.send(command).catch(err => {
+        throw new Error(`Failed to delete file from S3: ${err.message}`);
+      });
+    } else {
+      throw new Error(`Unable to extract filename from path: ${filePath}`);
     }
   }
```
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts around lines 90 to 99,
the method extracting the filename from the filePath uses a simple split and pop
approach which is fragile. Replace this with a more robust path parsing method,
such as using the built-in path module's basename function, to reliably extract
the filename regardless of path format variations.
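The `path.basename` approach suggested above can be sketched as follows. This is a hypothetical helper (the name `extractFileName` is illustrative), combining URL parsing with a plain-path fallback:

```typescript
import { basename } from 'path';

// Extract the object key's filename from either a full URL
// (e.g. "https://bucket.s3.amazonaws.com/abc.png") or a plain
// path/key (e.g. "uploads/abc.png"), without a bare split('/').pop().
function extractFileName(filePath: string): string {
  try {
    // Full URL: take the last segment of its pathname.
    return basename(new URL(filePath).pathname);
  } catch {
    // Not a valid URL: treat it as a plain path or object key.
    return basename(filePath);
  }
}
```

`new URL()` throws for non-URL inputs, so the `catch` branch handles relative paths and bare keys uniformly.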
```typescript
import { makeId } from '@gitroom/nestjs-libraries/services/make.is';
import mime from 'mime-types';
// @ts-ignore
import { getExtension } from 'mime';
```
💡 Verification agent
🧩 Analysis chain
Address import issues and improve type safety

There are several concerns with the imports:

- Suspicious import path: `@gitroom/nestjs-libraries/services/make.is`; verify this path is correct.
- `@ts-ignore` usage: the `@ts-ignore` comment is a code smell and should be avoided.

Let me verify the correct import path for the `makeId` function:
For the `getExtension` import, consider using a more type-safe approach:

```diff
-// @ts-ignore
-import { getExtension } from 'mime';
+import { extension } from 'mime-types';
```

Then update the usage accordingly in the methods.
🏁 Script executed:

```shell
#!/bin/bash
# Check if the makeId import path is correct
fd -t f "make.is" . | head -10
fd -t f -e ts -e js | xargs grep -l "makeId" | head -10
```
Length of output: 764
Update MIME extension handling and remove @ts-ignore

The `makeId` import path (`@gitroom/nestjs-libraries/services/make.is`) is correct, so no changes are needed there. Instead, please replace the `@ts-ignore` import from the `mime` package with the fully typed `extension` function from `mime-types`:

- Remove these lines:

```diff
- // @ts-ignore
- import { getExtension } from 'mime';
```

- Add this line (you can coalesce with the existing `mime-types` import):

```diff
+ import { extension } from 'mime-types';
```

- Then update any `getExtension(...)!` calls to:

```diff
- const ext = getExtension(file.mimetype)!;
+ const ext = extension(file.mimetype)!;
```

This removes the `@ts-ignore`, uses a type-safe API, and preserves the intended behavior.
🤖 Prompt for AI Agents
In libraries/nestjs-libraries/src/upload/s3.storage.ts between lines 3 and 6,
remove the @ts-ignore import of getExtension from 'mime' and instead import the
extension function from 'mime-types' by merging it with the existing mime
import. Then, update all calls to getExtension(...)! to use extension(...)! to
ensure type safety and remove the need for @ts-ignore.
Pull Request Overview
This PR adds support for S3 and MinIO as storage providers for file uploads, expanding beyond the existing local and Cloudflare options. Users can now configure S3-compatible storage services using the `STORAGE_PROVIDER` environment variable.
- Implements S3Storage and MinIOStorage classes with upload, download, and deletion capabilities
- Updates the UploadFactory to handle 's3' and 'minio' provider types
- Adds comprehensive environment variable configuration for S3 and MinIO services
Reviewed Changes
Copilot reviewed 4 out of 4 changed files in this pull request and generated 9 comments.
| File | Description |
|---|---|
| libraries/nestjs-libraries/src/upload/upload.factory.ts | Adds factory cases for S3 and MinIO storage providers with environment variable configuration |
| libraries/nestjs-libraries/src/upload/s3.storage.ts | Implements S3Storage class with AWS SDK v3 for S3-compatible services |
| libraries/nestjs-libraries/src/upload/minio.storage.ts | Implements MinIOStorage class specifically configured for MinIO with path-style URLs |
| .env.example | Documents new S3 configuration variables and updates storage provider options |
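Putting the environment variables together, the configuration this PR enables can be sketched roughly as follows. Variable names are taken from the factory code reviewed above; all values are placeholders:

```shell
# Select the storage backend: 's3' or 'minio' (alongside the existing
# 'local' and 'cloudflare' options)
STORAGE_PROVIDER="s3"
S3_ACCESS_KEY_ID="your-access-key-id"
S3_SECRET_ACCESS_KEY="your-secret-access-key"
S3_REGION="us-east-1"
S3_BUCKET_NAME="your-bucket"
# Optional for AWS S3; set it to target another S3-compatible endpoint
#S3_ENDPOINT="https://s3.example.com"

# Or, for MinIO (the endpoint is required here):
#STORAGE_PROVIDER="minio"
#MINIO_ACCESS_KEY="your-access-key"
#MINIO_SECRET_KEY="your-secret-key"
#MINIO_REGION="us-east-1"
#MINIO_BUCKET_NAME="your-bucket"
#MINIO_ENDPOINT="http://localhost:9000"
```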
```typescript
// @ts-ignore
import { getExtension } from 'mime';
```
Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.
```diff
-// @ts-ignore
-import { getExtension } from 'mime';
+import mime from 'mime-types';
```
Copilot uses AI. Check for mistakes.
```typescript
// @ts-ignore
import { getExtension } from 'mime';
```
Using @ts-ignore suppresses TypeScript errors without addressing the underlying issue. Consider using proper type definitions or a more specific type assertion.
```diff
-// @ts-ignore
-import { getExtension } from 'mime';
```
```typescript
        Key: `${id}.${extension}`,
        Body: Buffer.from(await loadImage.arrayBuffer()),
        ContentType: contentType,
        ACL: 'public-read',
```
Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.
```diff
-        ACL: 'public-read',
+        ACL: this.determineAcl(contentType),
```
```typescript
        Key: `${id}.${extension}`,
        Body: file.buffer,
        ContentType: file.mimetype,
        ACL: 'public-read',
```
Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.
```diff
-        ACL: 'public-read',
+        ACL: this.getAclForFile(file.mimetype),
```
```typescript
        Key: `${id}.${extension}`,
        Body: Buffer.from(await loadImage.arrayBuffer()),
        ContentType: contentType,
        ACL: 'public-read',
```
Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.
```diff
-        ACL: 'public-read',
+        ACL: this.determineAcl(contentType),
```
```typescript
        Key: `${id}.${extension}`,
        Body: file.buffer,
        ContentType: file.mimetype,
        ACL: 'public-read',
```
Setting ACL to 'public-read' makes uploaded files publicly accessible. Consider if this is intended behavior and whether access should be more restricted based on the file type or user permissions.
```diff
-        ACL: 'public-read',
+        ACL: acl || 'private',
```
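One common alternative to `ACL: 'public-read'`, hinted at by these review comments, is to keep the bucket private and hand out time-limited read URLs. The sketch below is not part of this PR; it assumes the `@aws-sdk/s3-request-presigner` package alongside `@aws-sdk/client-s3`:

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3';
import { getSignedUrl } from '@aws-sdk/s3-request-presigner';

// Hypothetical helper: generate a presigned GET URL for a private object,
// so no public ACL is ever set on upload.
async function getReadUrl(
  client: S3Client,
  bucket: string,
  key: string
): Promise<string> {
  const command = new GetObjectCommand({ Bucket: bucket, Key: key });
  // The URL expires after one hour; adjust expiresIn to taste.
  return getSignedUrl(client, command, { expiresIn: 3600 });
}
```

This works the same way against MinIO, since presigning is part of the S3 API surface.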
```typescript
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
```
Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.
```diff
-      stream: file.buffer as any,
+      stream: Readable.from(file.buffer),
```
```typescript
      path: uploadUrl,
      destination: uploadUrl,
      encoding: '7bit',
      stream: file.buffer as any,
```
Using 'as any' bypasses type safety. Consider defining a proper type or using a more specific type assertion if the buffer needs to be treated as a stream.
```diff
-      stream: file.buffer as any,
+      stream: Readable.from(file.buffer),
```
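The `Readable.from` suggestion above can be sketched with Node's standard `stream` module. Node special-cases buffers here: the stream emits the whole buffer as a single chunk rather than iterating byte by byte. The helper names below are illustrative:

```typescript
import { Readable } from 'stream';

// Wrap a Buffer in a real Readable stream instead of casting with `as any`.
function bufferToStream(buffer: Buffer): Readable {
  return Readable.from(buffer);
}

// Round-trip helper: collect a stream back into a Buffer, to show the
// wrapped stream yields the original bytes.
async function collect(stream: Readable): Promise<Buffer> {
  const chunks: Buffer[] = [];
  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }
  return Buffer.concat(chunks);
}
```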
```shell
#S3_SECRET_ACCESS_KEY="your-s3-secret-access-key"
#S3_REGION="us-east-1"
#S3_BUCKET_NAME="your-s3-bucket-name"
#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"
```
The S3_UPLOAD_URL environment variable is documented but not used in the S3Storage implementation. Consider removing this unused configuration or implementing its usage.
```diff
-#S3_UPLOAD_URL="https://your-s3-bucket-name.s3.amazonaws.com"
```
Consider reviewing the comments from @copilot and @coderabbitai.
Coming to postiz from mixpost, I really miss the ability to connect to my S3 services. Hope it can get merged soon.
- Add generic S3-compatible storage provider supporting any S3-compatible service
  - Supports AWS S3, MinIO, DigitalOcean Spaces, Backblaze B2, Wasabi, and more
  - Configurable public URLs with support for CDNs and custom domains
  - Signed URL generation for private buckets
  - Path-style and virtual-hosted-style URL support
  - Cloudflare R2 compatibility with checksum header handling
- Add FTP storage provider for traditional file transfer
  - Supports both FTP and FTPS (FTP over SSL/TLS)
  - Configurable passive/active modes
  - Connection pooling and timeout handling
  - Date-based directory organization
- Add SFTP storage provider for secure file transfer
  - SSH key and password authentication support
  - Connection keepalive and timeout management
  - Secure file transfer over SSH
- Update upload factory with comprehensive error handling
  - Detailed environment variable validation
  - Clear error messages for missing configuration
  - Support for all storage providers: local, cloudflare, s3-compatible, ftp, sftp
- Comprehensive environment configuration documentation
  - Detailed .env.example with examples for all providers
  - Clear separation between upload credentials and public URLs
  - Provider-specific configuration options

Closes gitroomhq#322
Supersedes gitroomhq#873
What kind of change does this PR introduce?

Support for S3 and MinIO for storage. You can configure it using the `STORAGE_PROVIDER` env variable.

Why was this change needed?

#322

Other information:

No.

Checklist:

Put an "X" in the boxes below to indicate you have followed the checklist;
Summary by CodeRabbit
New Features
Documentation