Task: Add S3 presigned URL uploads to bypass server proxying
Context
We just landed presigned URL downloads (PR #18864 on branch `feat/s3-presigned-url-redirect`). When `STORAGE_S3_PRESIGNED_URL_BASE` is set, the file controller returns a 302 redirect to a presigned S3 URL instead of proxying file bytes through the server.
Now we want the same for uploads: instead of the client uploading file bytes to the server (which then forwards to S3), the server should return a presigned PUT URL so the client uploads directly to S3.
Current upload flow
- Frontend calls the `uploadFilesFieldFile(file, fieldMetadataId)` GraphQL mutation — sends the full file bytes via GraphQL multipart upload
- Server receives the entire file into a buffer (`streamToBuffer`), runs `extractFileInfo` + `sanitizeFile`, writes to S3 via `FileStorageService.writeFile`, creates a `FileEntity` record, returns `{ id, url }`
- Frontend then links the file to the record (separate record update mutation)
Key server files:
- `packages/twenty-server/src/engine/core-modules/file/files-field/resolvers/files-field.resolver.ts` — GraphQL resolver, receives `GraphQLUpload`, calls the service
- `packages/twenty-server/src/engine/core-modules/file/files-field/services/files-field.service.ts` — `uploadFile()` method: validates, sanitizes, writes to S3, creates `FileEntity`
- `packages/twenty-server/src/engine/core-modules/file/file-workflow/resolvers/file-workflow.resolver.ts` — similar pattern for workflow files
- `packages/twenty-server/src/engine/core-modules/file/file-ai-chat/resolvers/file-ai-chat.resolver.ts` — similar pattern for AI chat files
Key frontend files:
- `packages/twenty-front/src/modules/object-record/record-field/ui/meta-types/hooks/useUploadFilesFieldFile.ts`
- `packages/twenty-front/src/modules/advanced-text-editor/hooks/useUploadWorkflowFile.ts`
- `packages/twenty-front/src/modules/ai/hooks/useAIChatFileUpload.ts`
- `packages/twenty-front/src/modules/object-record/record-show/hooks/usePersonAvatarUpload.ts`
Target upload flow
Step 1a (lightweight server call): Frontend calls a new `requestFileUpload` mutation with just metadata (filename, mimeType from `File.type`, fieldMetadataId). Server validates the MIME type against an allowlist, creates the `FileEntity` record, generates a presigned `PutObjectCommand` URL with the MIME type baked into the signature, and returns `{ presignedUrl, fileId, signedDownloadUrl }`.
Step 1b (direct to S3): Frontend does `fetch(presignedUrl, { method: 'PUT', body: file, headers: { 'Content-Type': mimeType } })` — file bytes go straight to S3 and never touch the server.
Step 2 (unchanged): Frontend links the `fileId` to the record via the existing record update mutation.
For the local driver (or S3 without `STORAGE_S3_PRESIGNED_URL_BASE`), the presigned upload URL is `null` and the frontend falls back to the existing GraphQL upload flow, so the existing mutations must remain functional.
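Step 1b boils down to a single raw `fetch` PUT. A minimal sketch — the helper name and the injectable `fetchImpl` parameter are hypothetical, added here so the logic is testable without S3:

```typescript
// Step 1b sketch: PUT the file bytes straight to S3. The Content-Type
// header must match the value baked into the presigned signature, or S3
// rejects the request (403 SignatureDoesNotMatch).
export const putFileToPresignedUrl = async (
  presignedUrl: string,
  fileBody: Blob,
  mimeType: string,
  fetchImpl: typeof fetch = fetch,
): Promise<void> => {
  const response = await fetchImpl(presignedUrl, {
    method: 'PUT',
    body: fileBody,
    headers: { 'Content-Type': mimeType },
  });

  if (!response.ok) {
    throw new Error(`Direct S3 upload failed with status ${response.status}`);
  }
};
```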
Implementation plan
Backend
- Add `getPresignedUploadUrl` to the `StorageDriver` interface (`drivers/interfaces/storage-driver.interface.ts`). Same pattern as the existing `getPresignedUrl` but for `PutObjectCommand`. Returns `string | null`. Parameters: `filePath`, `expiresInSeconds?`, `contentType?`.
- Implement in `S3Driver` — use `PutObjectCommand` with `getSignedUrl` from `@aws-sdk/s3-request-presigner`, using the existing `presignClient`. Return `null` when `presignClient` is not configured.
- Implement in `LocalDriver` — return `null`.
- Delegate in `ValidatedStorageDriver` — path traversal check, then delegate.
- Add `getPresignedUploadUrl` to `FileStorageService` — same pattern as the existing `getPresignedUrl`.
- Add a new `requestFileUpload` mutation — this replaces step 1 of the upload flow. It should:
  - Validate the MIME type against an allowlist (reuse the existing `INLINE_SAFE_MIME_TYPES` set from `get-content-disposition.utils.ts`, plus allow common document types)
  - Create the `FileEntity` record
  - Call `getPresignedUploadUrl` on the storage service
  - Return `{ presignedUrl: string | null, fileId: string, signedDownloadUrl: string }`
  - When `presignedUrl` is `null`, the frontend knows to fall back to the existing upload flow
- Keep the existing upload mutations unchanged — they continue to work for the local driver and as a fallback.
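The driver-layer pieces above can be sketched as follows. This is illustrative only: the interface is trimmed to the one new method, the class names carry a `Sketch` suffix, and the real `S3Driver` implementation would call `getSignedUrl(presignClient, new PutObjectCommand({ Bucket, Key, ContentType }), { expiresIn })` from `@aws-sdk/s3-request-presigner` — omitted here to keep the sketch dependency-free:

```typescript
// Trimmed sketch of the driver contract; the real StorageDriver
// interface has many more methods.
interface PresignedUploadCapable {
  getPresignedUploadUrl(params: {
    filePath: string;
    expiresInSeconds?: number;
    contentType?: string;
  }): Promise<string | null>;
}

// LocalDriver: presigned uploads are not supported, so return null and
// let the frontend fall back to the existing GraphQL upload flow.
export class LocalDriverSketch implements PresignedUploadCapable {
  async getPresignedUploadUrl(): Promise<string | null> {
    return null;
  }
}

// ValidatedStorageDriver: reject path traversal before delegating to the
// wrapped driver, mirroring the checks it already applies elsewhere.
export class ValidatedStorageDriverSketch implements PresignedUploadCapable {
  constructor(private readonly inner: PresignedUploadCapable) {}

  async getPresignedUploadUrl(params: {
    filePath: string;
    expiresInSeconds?: number;
    contentType?: string;
  }): Promise<string | null> {
    if (params.filePath.includes('..')) {
      throw new Error(`Invalid file path: ${params.filePath}`);
    }

    return this.inner.getPresignedUploadUrl(params);
  }
}
```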
Frontend
- Create a `usePresignedUpload` hook that encapsulates the two-step logic:
  - Call the `requestFileUpload` mutation
  - If `presignedUrl` is returned, `PUT` directly to S3
  - If `presignedUrl` is `null`, fall back to the existing GraphQL upload mutation
  - Return the same shape as today (`{ fileId, url }`)
- Update the upload hooks to use `usePresignedUpload`: `useUploadFilesFieldFile`, `useUploadWorkflowFile`, `useAIChatFileUpload`, `usePersonAvatarUpload`
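The branching inside `usePresignedUpload` can be sketched as a plain function with injected callbacks (all names hypothetical; the real hook would wire these to the Apollo mutation and a `fetch` PUT, and use `isDefined` from `twenty-shared/utils` per repo conventions):

```typescript
type UploadResult = { fileId: string; url: string };

// Orchestrates the two-step flow the hook would wrap. Dependencies are
// injected so the branching logic is testable without a server or S3.
export const uploadWithPresignedFallback = async (deps: {
  requestFileUpload: () => Promise<{
    presignedUrl: string | null;
    fileId: string;
    signedDownloadUrl: string;
  }>;
  putToS3: (presignedUrl: string) => Promise<void>;
  legacyGraphqlUpload: () => Promise<UploadResult>;
}): Promise<UploadResult> => {
  const { presignedUrl, fileId, signedDownloadUrl } =
    await deps.requestFileUpload();

  if (presignedUrl === null) {
    // Local driver, or S3 without presigned URLs configured: keep the
    // existing GraphQL multipart upload.
    return deps.legacyGraphqlUpload();
  }

  await deps.putToS3(presignedUrl);

  // Same shape the existing upload hooks return today.
  return { fileId, url: signedDownloadUrl };
};
```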
MIME type handling
- Use `File.type` from the browser (reliable, uses OS-level magic byte detection)
- Server validates the claimed MIME type against an allowlist before signing
- The presigned PUT URL includes `ContentType` in the signed params so the client can't change it after signing
- The `FileEntity` stores the MIME type (same as today)
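A sketch of the server-side allowlist check. The set contents below are hypothetical examples; the real list would reuse `INLINE_SAFE_MIME_TYPES` from `get-content-disposition.utils.ts` plus common document types:

```typescript
// Hypothetical allowlist for illustration; the real set is built from
// INLINE_SAFE_MIME_TYPES plus common document types.
const ALLOWED_UPLOAD_MIME_TYPES = new Set([
  'image/png',
  'image/jpeg',
  'image/gif',
  'image/webp',
  'application/pdf',
  'text/plain',
]);

// Validate the client-claimed MIME type before signing. image/svg+xml is
// deliberately absent: SVG sanitization happens server-side today, and
// direct uploads would bypass it.
export const assertUploadMimeTypeAllowed = (mimeType: string): void => {
  if (!ALLOWED_UPLOAD_MIME_TYPES.has(mimeType)) {
    throw new Error(`MIME type not allowed for direct upload: ${mimeType}`);
  }
};
```

Because the allowed `ContentType` is then baked into the presigned signature, a client that passes validation with `image/png` cannot turn around and PUT an SVG under a different Content-Type header.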
File sanitization
- SVG sanitization (stripping scripts) currently happens server-side. With direct uploads, SVGs bypass the server. For now, reject SVG uploads via the MIME type allowlist in the `requestFileUpload` mutation; SVG sanitization via a post-upload worker can be added later if needed.
- Other sanitization (the existing `sanitizeFile`) only acts on SVGs today, so rejecting SVGs covers this gap.
Testing
- Unit tests for the new `getPresignedUploadUrl` on `S3Driver`, `LocalDriver`, `ValidatedStorageDriver`
- Unit tests for the `requestFileUpload` mutation (MIME validation, presigned URL generation, fallback to `null`)
- Update existing controller/service specs
- Manual test: with MinIO running locally (`docker run -d --name twenty-minio -p 9000:9000 -p 9001:9001 -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin minio/minio server /data --console-address ":9001"`, then create the bucket), upload a file and verify it goes directly to S3 (check the MinIO console, and check that the browser network tab shows a PUT to localhost:9000)
Important constraints
- One export per file
- Don't abbreviate variable names
- Use `isDefined` from `twenty-shared/utils` instead of manual null checks
- Named exports only (no default exports)
- Comments should explain WHY, not WHAT