Commit 631b8d1
Fix decoder duplicate processing by tracking consumed bytes correctly
The root cause was that after calling decoder.decode(), we were saving the ENTIRE combined buffer (pending + new) to the pending buffer, including bytes already consumed by the decoder. This caused the decoder to see duplicate segment headers on subsequent iterations, leading to "Unexpected segment number" errors.
The fix:
1. Changed the decoder.decode() call to pass dataToProcess directly instead of dataToProcess.duplicate(), so the decoder advances the position of the buffer we actually inspect
2. Track how many bytes were consumed by comparing the buffer's remaining bytes before and after the decode call
3. Only save UNCONSUMED bytes to the pending buffer
4. This ensures the decoder receives a continuous, duplicate-free stream of bytes (a minimal sketch of the pattern follows this list)
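As a rough sketch of this pattern (not the actual SDK code: StructuredMessageDecoder here is a hypothetical stand-in for the policy's internal decoder, and all field and method names are invented), assuming the decoder consumes bytes by advancing the buffer's position:

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for the policy's internal decoder: decode() consumes
// what it can from the buffer by advancing its position.
interface StructuredMessageDecoder {
    ByteBuffer decode(ByteBuffer data);
}

final class ConsumedByteTracking {
    private ByteBuffer pending; // unconsumed bytes carried across iterations

    ByteBuffer process(ByteBuffer newData, StructuredMessageDecoder decoder) {
        ByteBuffer dataToProcess;
        if (pending == null) {
            dataToProcess = newData;
        } else {
            // Prepend leftover bytes from the previous iteration.
            dataToProcess = ByteBuffer.allocate(pending.remaining() + newData.remaining());
            dataToProcess.put(pending).put(newData);
            dataToProcess.flip();
        }

        int before = dataToProcess.remaining();
        // Pass the buffer itself, not a duplicate(), so the decoder's reads
        // advance a position we can observe afterwards.
        ByteBuffer decoded = decoder.decode(dataToProcess);
        int consumed = before - dataToProcess.remaining();

        // Carry over ONLY the unconsumed tail. Saving the whole combined
        // buffer (the old behavior) would replay the consumed bytes and make
        // the decoder see duplicate segment headers next iteration.
        pending = consumed == before ? null : dataToProcess.slice();
        return decoded;
    }
}
```

The key line is the last assignment: slice() captures only the bytes between the decoder's final position and the limit, i.e. exactly what decode() did not consume.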
Example flow:
- Iteration 1: pending=null, new=[bytes 0-4], combine=[bytes 0-4], decoder consumes 0 (not enough), pending=[bytes 0-4]
- Iteration 2: pending=[bytes 0-4], new=[byte 5], combine=[bytes 0-5], decoder consumes 0 (not enough), pending=[bytes 0-5]
- ...
- Iteration 13: pending=[bytes 0-12], new=[byte 13], combine=[bytes 0-13], decoder consumes all 14 bytes (header parsed!), pending=null
- Iteration 14: pending=null, new=[byte 14], decoder continues from where it left off
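To make this flow concrete, here is a self-contained toy simulation (not SDK code). It assumes a 13-byte header (an illustrative value, not a claim about the real format) and feeds a fake decoder one 5-byte chunk followed by single bytes; the exact iteration at which the header parses depends on chunk sizes, but the pending-buffer behavior matches the flow above:

```java
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

public final class PendingBufferFlowDemo {
    // Assumed header length for this toy; the real format's header may differ.
    static final int HEADER_SIZE = 13;

    public static void main(String[] args) {
        // Mirror the flow above: one 5-byte chunk, then single bytes.
        List<byte[]> chunks = new ArrayList<>();
        chunks.add(new byte[] {0, 1, 2, 3, 4});
        for (int b = 5; b < 16; b++) {
            chunks.add(new byte[] {(byte) b});
        }

        ByteBuffer pending = null;
        boolean headerParsed = false;
        int iteration = 0;

        for (byte[] chunk : chunks) {
            iteration++;
            ByteBuffer newData = ByteBuffer.wrap(chunk);
            ByteBuffer data;
            if (pending == null) {
                data = newData;
            } else {
                data = ByteBuffer.allocate(pending.remaining() + newData.remaining());
                data.put(pending).put(newData);
                data.flip();
            }

            int before = data.remaining();
            // Toy decode: consume nothing until a full header is buffered,
            // then consume every byte as it arrives.
            if (!headerParsed && data.remaining() >= HEADER_SIZE) {
                data.position(data.position() + HEADER_SIZE);
                headerParsed = true;
            }
            if (headerParsed) {
                data.position(data.limit());
            }
            int consumed = before - data.remaining();

            // The fix: carry over only the unconsumed tail.
            pending = data.hasRemaining() ? data.slice() : null;
            System.out.printf("iteration %d: consumed=%d, pending=%s%n",
                iteration, consumed,
                pending == null ? "null" : pending.remaining() + " byte(s)");
        }
    }
}
```

Running this prints consumed=0 with a growing pending buffer until enough bytes have accumulated for the header, at which point everything is consumed in one call and pending drops back to null, exactly as in the flow above.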
Addresses comments #2499104452 and #3447938815.
Co-authored-by: gunjansingh-msft <[email protected]>
1 parent: 107ee8c
1 file changed (+10 −2): sdk/storage/azure-storage-common/src/main/java/com/azure/storage/common/policy
[Diff hunk not captured in this extraction: 10 additions and 2 deletions spanning lines 93-107 of the original file (lines 93-115 after the change).]