Tusflow implements advanced chunk management to handle large file uploads efficiently and reliably, leveraging edge computing capabilities and dynamic optimization.
<Callout type="info">
Tusflow automatically adjusts chunk sizes based on network conditions, file characteristics, and edge worker constraints, with sizes ranging from 5MB to 50MB.
</Callout>
## Key Components
<Steps>
### Dynamic Chunk Sizing
Optimizes chunk size based on network speed, file size, and edge worker memory constraints.
### Parallel Processing
Uploads multiple chunks simultaneously to maximize throughput while maintaining stability.
### Adaptive Batching
Adjusts the number of concurrent uploads based on network conditions.
### Checksum Verification
Ensures data integrity through MD5, SHA1, SHA256, or SHA512 checksums.
</Steps>
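As a rough sketch of how dynamic sizing like this can work: the 10-second transfer target and the parameter names below are illustrative assumptions; only the 5MB-50MB range and the `calculateOptimalChunkSize` name come from these docs.

```typescript
// Illustrative sketch of dynamic chunk sizing, not Tusflow's actual code.
const MIN_CHUNK = 5 * 1024 * 1024; // 5MB floor (also the S3 multipart minimum)
const MAX_CHUNK = 50 * 1024 * 1024; // 50MB ceiling

function calculateOptimalChunkSize(
  fileSize: number,
  networkSpeedBps: number, // smoothed bytes-per-second estimate
  memoryLimit: number // edge worker memory budget in bytes (assumed input)
): number {
  // Aim for chunks that take roughly 10 seconds to upload (assumed target).
  const targetByTime = networkSpeedBps * 10;
  // Never exceed what the worker can buffer, or the file itself.
  const bounded = Math.min(targetByTime, memoryLimit, fileSize);
  // Clamp into the supported 5MB-50MB range.
  return Math.max(MIN_CHUNK, Math.min(MAX_CHUNK, bounded));
}
```

Slow networks fall back to the 5MB floor, while fast connections with ample memory are capped at 50MB so a single chunk never exhausts the worker.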
## Features
### Dynamic Sizing
- Automatically adjusts chunk sizes based on network conditions, file size, and memory constraints
- Ensures optimal upload performance and resource utilization
### Parallel Upload Management
- Concurrent chunk uploads (up to 10 simultaneous uploads)
- Adaptive batch sizing based on network conditions
- Progress tracking per chunk with rate-limited updates
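A minimal sketch of batched parallel uploads, assuming a hypothetical `uploadChunk` callback; Tusflow's real scheduler also adapts the batch size to network conditions:

```typescript
// Illustrative sketch: upload chunks in fixed-size concurrent batches.
// `uploadChunk` is a stand-in for the real per-chunk upload call.
async function uploadInBatches<T>(
  chunks: T[],
  batchSize: number,
  uploadChunk: (chunk: T, index: number) => Promise<void>
): Promise<void> {
  for (let i = 0; i < chunks.length; i += batchSize) {
    const batch = chunks.slice(i, i + batchSize);
    // Each batch runs concurrently; batches run sequentially, so the
    // number of in-flight uploads never exceeds batchSize.
    await Promise.all(batch.map((chunk, j) => uploadChunk(chunk, i + j)));
  }
}
```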
### Error Handling and Recovery
- Automatic retries with exponential backoff
- Partial upload recovery
- Circuit breaker implementation for failure management
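The retry behavior can be sketched as follows; the base delay, cap, and attempt count are assumed values, not Tusflow's configuration:

```typescript
// Illustrative exponential backoff: 500ms, 1s, 2s, 4s, ... capped at 30s.
function backoffDelay(attempt: number, baseMs = 500, maxMs = 30_000): number {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Retry an operation with backoff between attempts. After repeated
// failures a circuit breaker would open here and fail fast instead.
async function withRetries<T>(
  op: () => Promise<T>,
  maxAttempts = 5
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await op();
    } catch (err) {
      lastError = err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
  throw lastError;
}
```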
### Performance Optimization
- Exponential moving average for network speed calculation
- Memory-aware chunk processing
- Efficient use of edge worker resources
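An exponential moving average keeps the speed estimate stable against bursty samples. A minimal sketch, with an assumed smoothing factor of 0.2:

```typescript
// Illustrative EMA for network speed; alpha = 0.2 is an assumed value.
function updateSpeedEstimate(
  previousBps: number | null,
  sampleBps: number,
  alpha = 0.2
): number {
  if (previousBps === null) return sampleBps; // first sample seeds the average
  // New samples nudge the estimate; short spikes are damped.
  return alpha * sampleBps + (1 - alpha) * previousBps;
}
```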
## How It Works
<Callout type="warning">
Understanding the chunk management process is crucial for optimizing your application's upload performance and reliability.
</Callout>
1. **File Analysis**
   - Analyze file size and type
   - Calculate initial network speed
2. **Chunk Creation**
   - Determine optimal chunk size using the `calculateOptimalChunkSize` function
   - Consider edge worker memory constraints and network conditions
3. **Parallel Processing**
   - Upload chunks in parallel using adaptive batch sizes
   - Dynamically adjust concurrency based on network speed
4. **Progress Tracking**
   - Update upload progress with rate limiting
   - Store metadata in Upstash Redis for resumability
5. **Completion and Verification**
   - Verify all parts are uploaded correctly
   - Complete multipart upload on S3
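The integrity check in the final step can be illustrated with a small helper. This sketch uses Node's `crypto` module for simplicity; an edge worker would typically use the Web Crypto API instead.

```typescript
import { createHash } from 'node:crypto';

// Illustrative chunk checksum verification supporting the algorithms
// the docs list (MD5, SHA1, SHA256, SHA512).
type ChecksumAlgo = 'md5' | 'sha1' | 'sha256' | 'sha512';

function verifyChecksum(
  chunk: Uint8Array,
  algorithm: ChecksumAlgo,
  expectedHex: string
): boolean {
  const actual = createHash(algorithm).update(chunk).digest('hex');
  return actual === expectedHex;
}
```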
## Best Practices
1. **Optimal Chunk Size Selection**
   - Let Tusflow handle dynamic sizing
   - Monitor and adjust the `CHUNK_SIZE` configuration if needed
2. **Parallel Upload Configuration**
   - Start with the default `MAX_CONCURRENT` and `BATCH_SIZE` values
   - Adjust based on your specific use case and performance requirements
3. **Error Handling**
   - Implement client-side retry logic
   - Utilize Tusflow's built-in circuit breaker for failure management
By leveraging intelligent chunk management, parallel processing, and adaptive optimization, Tusflow provides a powerful and flexible upload solution that can handle large files, network fluctuations, and high concurrency with ease.
description: Learn how Tusflow leverages Upstash Redis, AWS S3 multipart uploads, and the TUS protocol for efficient and reliable resumable file uploads.
Tusflow utilizes a powerful combination of Upstash Redis, AWS S3 multipart uploads, and the TUS protocol to provide a robust and efficient resumable upload solution.
<Callout type="info">
Resumable uploads allow clients to pause and resume file uploads, making the process more resilient to network issues and providing a better user experience.
</Callout>
## Key Components
<Steps>
### Upstash Redis
A serverless Redis database used for storing upload metadata and managing upload state.
### AWS S3 Multipart Uploads
Enables large file uploads to be split into smaller parts, uploaded independently, and then reassembled.
### TUS Protocol
An open protocol for resumable file uploads, ensuring consistency and interoperability.
</Steps>
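To make the Redis component concrete, here is an illustrative sketch of the metadata a server might keep per upload; the field and key names are assumptions for illustration, not Tusflow's actual schema:

```typescript
// Hypothetical shape of the per-upload record stored in Upstash Redis.
interface UploadMetadata {
  uploadId: string; // S3 multipart upload ID
  totalSize: number; // Upload-Length in bytes
  offset: number; // bytes confirmed so far (Upload-Offset)
  completedParts: number[]; // S3 part numbers already uploaded
}

// Assumed key scheme for looking up an upload's state.
function metadataKey(fileId: string): string {
  return `upload:${fileId}`;
}

// An upload is complete once the confirmed offset reaches the total size.
function isComplete(meta: UploadMetadata): boolean {
  return meta.offset >= meta.totalSize;
}
```

Because every request can fetch this record by key, any edge location can answer a HEAD request or accept the next PATCH without sticky sessions.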
## Features
### Upstash Redis Integration
- Serverless and globally distributed
- Low-latency access to upload metadata
- Automatic scaling and high availability
### AWS S3 Multipart Upload
- Support for large file uploads (up to 5TB)
- Parallel upload of file parts
- Ability to pause and resume uploads
### TUS Protocol Implementation
- Standardized resumable upload process
- Cross-platform compatibility
- Built-in error handling and recovery
## How It Works
<Callout type="warning">
Understanding the upload process is crucial for optimizing your application's performance and reliability.
</Callout>
1. **Initiate Upload**
   - Client sends a POST request with file metadata
   - Server creates an entry in Upstash Redis and initiates S3 multipart upload
2. **Upload Chunks**
   - Client sends file chunks using PATCH requests
   - Server uploads parts to S3 and updates progress in Upstash Redis
3. **Resume Upload**
   - Client can retrieve the upload offset using a HEAD request
   - Server fetches the current state from Upstash Redis
4. **Complete Upload**
   - After all chunks are uploaded, server completes the S3 multipart upload
   - Upload metadata is cleaned up from Upstash Redis
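The resume step can be sketched from the client side: read `Upload-Offset` from the HEAD response, then compute the next chunk's byte range. The header handling follows TUS 1.0.0 semantics; `nextChunkRange` is an illustrative helper, not Tusflow's API.

```typescript
// Parse the TUS Upload-Offset header from a HEAD response.
function parseUploadOffset(headers: Headers): number {
  const value = headers.get('Upload-Offset');
  if (value === null) throw new Error('Missing Upload-Offset header');
  return Number.parseInt(value, 10);
}

// Compute the next chunk's byte range (end is exclusive, clamped to EOF).
function nextChunkRange(
  offset: number,
  totalSize: number,
  chunkSize: number
): { start: number; end: number } {
  return { start: offset, end: Math.min(offset + chunkSize, totalSize) };
}
```

The client would then slice the file at that range and send it in a PATCH request with `Upload-Offset: <start>`.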
## Best Practices
1. **Optimal Chunk Size**
   - Balance between network efficiency and memory usage
   - Typically between 5MB and 15MB per chunk
2. **Error Handling**
   - Implement exponential backoff for retries
   - Handle network disconnections gracefully
3. **Progress Tracking**
   - Utilize Upstash Redis for real-time progress updates
   - Implement client-side progress bars for better UX
```http
HEAD /files/24e533e02ec3bc40c387f1a0e460e216 HTTP/1.1
Host: api.tusflow.com
Tus-Resumable: 1.0.0

HTTP/1.1 200 OK
Upload-Offset: 5242880
Upload-Length: 100000000
Tus-Resumable: 1.0.0
```
</Tab>
</Tabs>
By leveraging Upstash Redis, AWS S3 multipart uploads, and the TUS protocol, Tusflow provides a powerful and flexible resumable upload solution that can handle large files, network interruptions, and high concurrency with ease.