Add S3 presigned URL uploads to bypass server proxying #2358

@FelixMalfait

Description

Task: Add S3 presigned URL uploads to bypass server proxying

Context

We just landed presigned URL downloads (PR #18864 on branch feat/s3-presigned-url-redirect). When STORAGE_S3_PRESIGNED_URL_BASE is set, the file controller returns a 302 redirect to a presigned S3 URL instead of proxying file bytes through the server.

Now we want the same for uploads: instead of the client uploading file bytes to the server (which then forwards to S3), the server should return a presigned PUT URL so the client uploads directly to S3.

Current upload flow

  1. Frontend calls uploadFilesFieldFile(file, fieldMetadataId) GraphQL mutation — sends the full file bytes via GraphQL multipart upload
  2. Server receives the entire file into a buffer (streamToBuffer), runs extractFileInfo + sanitizeFile, writes to S3 via FileStorageService.writeFile, creates a FileEntity record, returns { id, url }
  3. Frontend then links the file to the record (separate record update mutation)
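The cost this task removes sits in step 2: the whole upload is buffered in server memory before it is forwarded to S3. A minimal sketch of that buffering pattern (hypothetical; the real streamToBuffer lives in the server codebase and may differ in detail):

```typescript
import { Readable } from 'node:stream';

// Hypothetical sketch: the entire upload is accumulated in memory before the
// server forwards it to S3 — exactly the hop a presigned PUT URL eliminates.
const streamToBuffer = async (stream: Readable): Promise<Buffer> => {
  const chunks: Buffer[] = [];

  for await (const chunk of stream) {
    chunks.push(Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk));
  }

  return Buffer.concat(chunks);
};
```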

Key server files:

  • packages/twenty-server/src/engine/core-modules/file/files-field/resolvers/files-field.resolver.ts — GraphQL resolver, receives GraphQLUpload, calls service
  • packages/twenty-server/src/engine/core-modules/file/files-field/services/files-field.service.ts — uploadFile() method: validates, sanitizes, writes to S3, creates FileEntity
  • packages/twenty-server/src/engine/core-modules/file/file-workflow/resolvers/file-workflow.resolver.ts — similar pattern for workflow files
  • packages/twenty-server/src/engine/core-modules/file/file-ai-chat/resolvers/file-ai-chat.resolver.ts — similar pattern for AI chat files

Key frontend files:

  • packages/twenty-front/src/modules/object-record/record-field/ui/meta-types/hooks/useUploadFilesFieldFile.ts
  • packages/twenty-front/src/modules/advanced-text-editor/hooks/useUploadWorkflowFile.ts
  • packages/twenty-front/src/modules/ai/hooks/useAIChatFileUpload.ts
  • packages/twenty-front/src/modules/object-record/record-show/hooks/usePersonAvatarUpload.ts

Target upload flow

Step 1a (lightweight server call): Frontend calls a new requestFileUpload mutation with just metadata (filename, mimeType from File.type, fieldMetadataId). Server validates the MIME type against an allowlist, creates the FileEntity record, generates a presigned PutObjectCommand URL with the MIME type baked into the signature, and returns { presignedUrl, fileId, signedDownloadUrl }.

Step 1b (direct to S3): Frontend does fetch(presignedUrl, { method: 'PUT', body: file, headers: { 'Content-Type': mimeType } }) — file bytes go straight to S3, never touch the server.

Step 2 (unchanged): Frontend links the fileId to the record via the existing record update mutation.

For the local driver (or S3 without STORAGE_S3_PRESIGNED_URL_BASE), the presigned upload URL is null and the frontend falls back to the existing GraphQL upload flow. So the existing mutations must remain functional.
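The shape of the direct PUT in step 1b, and the null check that gates the fallback, can be sketched as follows (all names here are hypothetical helpers, not existing codebase identifiers):

```typescript
type RequestFileUploadResult = {
  presignedUrl: string | null;
  fileId: string;
  signedDownloadUrl: string;
};

// Hypothetical helper for step 1b. The Content-Type header must match the
// contentType baked into the presigned signature; a mismatch makes S3 reject
// the PUT with a signature error.
const buildPresignedPutRequest = (file: {
  type: string;
}): { method: string; headers: Record<string, string> } => ({
  method: 'PUT',
  headers: { 'Content-Type': file.type },
});

// Hypothetical gate: null means the driver cannot presign (local driver, or
// S3 without STORAGE_S3_PRESIGNED_URL_BASE), so the existing GraphQL upload
// mutation must be used instead.
const shouldUploadDirectly = (result: RequestFileUploadResult): boolean =>
  result.presignedUrl !== null;
```

At the call site the request init would be spread into fetch together with the file as the body, e.g. fetch(presignedUrl, { ...buildPresignedPutRequest(file), body: file }).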

Implementation plan

Backend

  1. Add getPresignedUploadUrl to StorageDriver interface (drivers/interfaces/storage-driver.interface.ts). Same pattern as the existing getPresignedUrl but for PutObjectCommand. Returns string | null. Parameters: filePath, expiresInSeconds?, contentType?.

  2. Implement in S3Driver — use PutObjectCommand with getSignedUrl from @aws-sdk/s3-request-presigner, using the existing presignClient. Return null when presignClient is not configured.

  3. Implement in LocalDriver — return null.

  4. Delegate in ValidatedStorageDriver — path traversal check + delegate.

  5. Add getPresignedUploadUrl to FileStorageService — same pattern as existing getPresignedUrl.

  6. Add new requestFileUpload mutation — this replaces step 1 of the upload flow. It should:

    • Validate MIME type against an allowlist (reuse the existing INLINE_SAFE_MIME_TYPES set from get-content-disposition.utils.ts, plus allow common document types)
    • Create the FileEntity record
    • Call getPresignedUploadUrl on the storage service
    • Return { presignedUrl: string | null, fileId: string, signedDownloadUrl: string }
    • When presignedUrl is null, the frontend knows to fall back to the existing upload flow
  7. Keep existing upload mutations unchanged — they continue to work for local driver and as fallback.
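Backend steps 1–4 can be sketched like this. The real StorageDriver interface and the @aws-sdk/s3-request-presigner wiring live in the server codebase; in this hypothetical sketch the presigner is injected so the control flow is visible without the SDK:

```typescript
// Stand-in for getSignedUrl(presignClient, new PutObjectCommand(...), { expiresIn }).
type PresignPut = (params: {
  filePath: string;
  contentType?: string;
  expiresInSeconds: number;
}) => Promise<string>;

interface StorageDriverSketch {
  getPresignedUploadUrl(
    filePath: string,
    expiresInSeconds?: number,
    contentType?: string,
  ): Promise<string | null>;
}

class S3DriverSketch implements StorageDriverSketch {
  // presignPut is undefined when no presign client is configured, mirroring
  // the "return null when presignClient is not configured" requirement.
  constructor(private readonly presignPut?: PresignPut) {}

  async getPresignedUploadUrl(
    filePath: string,
    expiresInSeconds = 300,
    contentType?: string,
  ): Promise<string | null> {
    if (this.presignPut === undefined) {
      return null;
    }

    return this.presignPut({ filePath, contentType, expiresInSeconds });
  }
}

class LocalDriverSketch implements StorageDriverSketch {
  // The local driver cannot presign; null triggers the GraphQL fallback.
  async getPresignedUploadUrl(): Promise<string | null> {
    return null;
  }
}

class ValidatedStorageDriverSketch implements StorageDriverSketch {
  constructor(private readonly delegate: StorageDriverSketch) {}

  async getPresignedUploadUrl(
    filePath: string,
    expiresInSeconds?: number,
    contentType?: string,
  ): Promise<string | null> {
    // Reject path traversal before delegating, mirroring the existing wrapper.
    if (filePath.split('/').includes('..')) {
      throw new Error(`Invalid file path: ${filePath}`);
    }

    return this.delegate.getPresignedUploadUrl(
      filePath,
      expiresInSeconds,
      contentType,
    );
  }
}
```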

Frontend

  1. Create a usePresignedUpload hook that encapsulates the two-step logic:

    • Call requestFileUpload mutation
    • If presignedUrl is returned, PUT directly to S3
    • If presignedUrl is null, fall back to the existing GraphQL upload mutation
    • Return the same shape as today ({ fileId, url })
  2. Update upload hooks to use usePresignedUpload:

    • useUploadFilesFieldFile
    • useUploadWorkflowFile
    • useAIChatFileUpload
    • usePersonAvatarUpload
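The control flow the usePresignedUpload hook would encapsulate can be sketched as a plain async function with every network call injected (all names hypothetical), making the direct-vs-fallback branching explicit:

```typescript
type UploadResult = { fileId: string; url: string };

type PresignedUploadDeps = {
  requestFileUpload: (metadata: {
    filename: string;
    mimeType: string;
  }) => Promise<{
    presignedUrl: string | null;
    fileId: string;
    signedDownloadUrl: string;
  }>;
  putToS3: (presignedUrl: string, file: unknown, mimeType: string) => Promise<void>;
  graphqlUpload: (file: unknown) => Promise<UploadResult>;
};

const uploadWithPresignedFallback = async (
  file: { name: string; type: string },
  deps: PresignedUploadDeps,
): Promise<UploadResult> => {
  const { presignedUrl, fileId, signedDownloadUrl } =
    await deps.requestFileUpload({ filename: file.name, mimeType: file.type });

  // Null signals a driver that cannot presign (local driver, or S3 without
  // STORAGE_S3_PRESIGNED_URL_BASE): keep the legacy GraphQL path alive.
  if (presignedUrl === null) {
    return deps.graphqlUpload(file);
  }

  await deps.putToS3(presignedUrl, file, file.type);

  // Same shape as today ({ fileId, url }) so calling hooks stay unchanged.
  return { fileId, url: signedDownloadUrl };
};
```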

MIME type handling

  • Use File.type from the browser (derived from the file name/extension via the OS type registry — treat it as a client-supplied claim, not content inspection)
  • Server validates the claimed MIME type against an allowlist before signing
  • The presigned PUT URL includes ContentType in the signed params so the client can't change it after signing
  • The FileEntity stores the MIME type (same as today)
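The server-side validation step can be sketched as below. The allowlist contents are an assumption: the real mutation would reuse INLINE_SAFE_MIME_TYPES plus common document types, and the exact set is defined in the codebase:

```typescript
// Hypothetical allowlist. SVG is deliberately absent: with direct uploads the
// server-side SVG sanitization step is bypassed, so SVGs are rejected upfront.
const ALLOWED_UPLOAD_MIME_TYPES = new Set([
  'image/png',
  'image/jpeg',
  'image/gif',
  'image/webp',
  'application/pdf',
  'text/plain',
]);

// Validate the client-claimed MIME type before signing; the same value is then
// baked into the presigned PUT signature as ContentType so it cannot change.
const assertAllowedUploadMimeType = (mimeType: string): void => {
  if (!ALLOWED_UPLOAD_MIME_TYPES.has(mimeType.toLowerCase())) {
    throw new Error(`Unsupported MIME type for direct upload: ${mimeType}`);
  }
};
```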

File sanitization

  • SVG sanitization (stripping scripts) currently happens server-side. With direct uploads, SVGs bypass the server. For now, reject SVG uploads via the MIME type allowlist in the requestFileUpload mutation. SVG sanitization via a post-upload worker can be added later if needed.
  • Other sanitization (the existing sanitizeFile) only acts on SVGs today, so rejecting SVGs covers this gap.

Testing

  • Unit tests for the new getPresignedUploadUrl on S3Driver, LocalDriver, ValidatedStorageDriver
  • Unit tests for the requestFileUpload mutation (MIME validation, presigned URL generation, fallback to null)
  • Update existing controller/service specs
  • Manual test: with MinIO running locally (docker run -d --name twenty-minio -p 9000:9000 -p 9001:9001 -e MINIO_ROOT_USER=minioadmin -e MINIO_ROOT_PASSWORD=minioadmin minio/minio server /data --console-address ":9001", then create the bucket), upload a file and verify it goes directly to S3 (check the MinIO console; the browser network tab should show a PUT to localhost:9000)

Important constraints

  • One export per file
  • Don't abbreviate variable names
  • Use isDefined from twenty-shared/utils instead of manual null checks
  • Named exports only (no default exports)
  • Comments should explain WHY, not WHAT

Metadata

Status: 🔖 Planned