Commit 967a81e

fix: use min instead of max when capping write buffer size to Int range (#3914)
COMET_SHUFFLE_WRITE_BUFFER_SIZE is a Long (bytesConf), but the protobuf
field is int32, so the value must be capped at Int.MaxValue. The code
used .max(Int.MaxValue), which always returns Int.MaxValue (~2 GiB)
regardless of the configured value. It should be .min(Int.MaxValue) to
preserve smaller values while capping at the Int range.

1 parent fc03f7d
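The bug comes down to Long's `max`/`min`: since the configured buffer size can never exceed `Int.MaxValue` after capping, taking `max` discards the configured value entirely. A minimal sketch of the buggy and fixed behavior (the helper names `buggyCap`/`fixedCap` are illustrative, not the names used in the actual Comet code):

```scala
object CapExample {
  // Buggy version: max of the value and Int.MaxValue is always Int.MaxValue
  // for any value that fits in an Int, so the configured size is ignored.
  def buggyCap(bytes: Long): Int = bytes.max(Int.MaxValue).toInt

  // Fixed version: min preserves values below Int.MaxValue and caps
  // anything larger, which is the intended clamping to the int32 range.
  def fixedCap(bytes: Long): Int = bytes.min(Int.MaxValue).toInt

  def main(args: Array[String]): Unit = {
    val configured = 4L * 1024 * 1024 // a 4 MiB buffer size from config
    println(buggyCap(configured))     // 2147483647 — config value lost
    println(fixedCap(configured))     // 4194304 — config value preserved
  }
}
```

Note that `Int.MaxValue` is widened to `Long` before the comparison, so both versions type-check; only `min` gives the intended clamp.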
File tree: 1 file changed, 1 addition & 1 deletion
spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle
Diff: 1 addition & 1 deletion at line 194 (context lines 191–197); the changed line's content was not captured in this extract.