Commit 967a81e

fix: use min instead of max when capping write buffer size to Int range (#3914)
COMET_SHUFFLE_WRITE_BUFFER_SIZE is a Long (bytesConf), but the protobuf field is int32, so the value must be capped at Int.MaxValue. The code used .max(Int.MaxValue), which always returns Int.MaxValue (~2 GiB) regardless of the configured value. It should be .min(Int.MaxValue), which preserves smaller values while capping at the Int range.
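The one-character fix matters because Scala's `Long.max` picks the larger of its two operands, not the smaller. A minimal sketch of the two behaviors (the 4 MiB config value here is hypothetical, not taken from Comet):

```scala
// Hypothetical configured buffer size, as a Long (bytesConf-style value).
val configured: Long = 4L * 1024 * 1024 // 4 MiB = 4194304 bytes

// Bug: max returns the LARGER operand, so any realistic config value
// loses to Int.MaxValue and the result is always ~2 GiB.
val wrong = configured.max(Int.MaxValue).toInt // always Int.MaxValue

// Fix: min keeps smaller values and only clamps once the configured
// value exceeds the int32 range of the protobuf field.
val right = configured.min(Int.MaxValue).toInt // 4194304, config preserved
```

With the fix, a value above the Int range (say, 3 GiB) would still be clamped to Int.MaxValue, which is the intended capping behavior.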
Parent commit: fc03f7d

File tree

1 file changed: 1 addition, 1 deletion

spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometNativeShuffleWriter.scala

Lines changed: 1 addition & 1 deletion
@@ -191,7 +191,7 @@ class CometNativeShuffleWriter[K, V](
     shuffleWriterBuilder.setCompressionLevel(
       CometConf.COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL.get)
     shuffleWriterBuilder.setWriteBufferSize(
-      CometConf.COMET_SHUFFLE_WRITE_BUFFER_SIZE.get().max(Int.MaxValue).toInt)
+      CometConf.COMET_SHUFFLE_WRITE_BUFFER_SIZE.get().min(Int.MaxValue).toInt)

     outputPartitioning match {
       case p if isSinglePartitioning(p) =>
