[Issue 1446][Consumer] Fix consumer can't consume resent chunked messages#1464

Open
geniusjoe wants to merge 1 commit into apache:master from geniusjoe:bugfix/chunk-with-reconnect

Conversation

@geniusjoe (Contributor)

Master Issue: #1446
related issue apache/pulsar#21070 and apache/pulsar#21101

Motivation

Currently, when the producer resends the chunks of a chunked message like this:

M1: UUID: 0, ChunkID: 0
M2: UUID: 0, ChunkID: 0 // Resend the first chunk
M3: UUID: 0, ChunkID: 1

When the consumer receives M2, it finds that it is already tracking the chunked message with UUID 0, and then discards both M1 and M2. As a result, the whole chunked message can never be consumed, even though it is already persisted in the Pulsar topic.

Here is the code logic:

	if ctx == nil || ctx.chunkedMsgBuffer == nil || chunkID != ctx.lastChunkedMsgID+1 {
		lastChunkedMsgID := -1
		totalChunks := -1
		if ctx != nil {
			lastChunkedMsgID = int(ctx.lastChunkedMsgID)
			totalChunks = int(ctx.totalChunks)
			ctx.chunkedMsgBuffer.Clear()
		}
		pc.log.Warnf(fmt.Sprintf(
			"Received unexpected chunk messageId %s, last-chunk-id %d, chunkId = %d, total-chunks %d",
			msgID.String(), lastChunkedMsgID, chunkID, totalChunks))
		pc.chunkedMsgCtxMap.remove(uuid)
		pc.availablePermits.inc()
		return nil
	}
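The discard path above can be reduced to a minimal, self-contained sketch. The names below (`chunk`, `ctx`, `oldProcess`) are illustrative, not the client's actual types; the sketch only models the "unexpected chunk ID drops the whole context" behavior, which is enough to reproduce the M1/M2/M3 loss:

```go
package main

import "fmt"

// chunk models the metadata the consumer sees on each chunked-message part.
type chunk struct {
	uuid    string
	chunkID int
	payload string
}

// ctx tracks the in-progress reassembly of one chunked message.
type ctx struct {
	lastChunkedMsgID int
	buffer           string
}

// oldProcess mimics the buggy logic: any out-of-order chunk ID
// (including a resent first chunk) discards the whole context.
func oldProcess(ctxMap map[string]*ctx, c chunk, totalChunks int) (string, bool) {
	if c.chunkID == 0 {
		// Only create a context if none exists yet (add-if-absent).
		if _, tracking := ctxMap[c.uuid]; !tracking {
			ctxMap[c.uuid] = &ctx{lastChunkedMsgID: -1}
		}
	}
	cc := ctxMap[c.uuid]
	if cc == nil || c.chunkID != cc.lastChunkedMsgID+1 {
		// Unexpected chunk: drop everything buffered so far.
		delete(ctxMap, c.uuid)
		return "", false
	}
	cc.buffer += c.payload
	cc.lastChunkedMsgID = c.chunkID
	if c.chunkID == totalChunks-1 {
		delete(ctxMap, c.uuid)
		return cc.buffer, true
	}
	return "", false
}

func main() {
	msgs := []chunk{
		{"0", 0, "hello "}, // M1
		{"0", 0, "hello "}, // M2: resent first chunk
		{"0", 1, "world"},  // M3
	}
	ctxMap := map[string]*ctx{}
	for _, m := range msgs {
		if full, ok := oldProcess(ctxMap, m, 2); ok {
			fmt.Println("assembled:", full)
			return
		}
	}
	// M2 destroys M1's context, so M3 arrives with no context either.
	fmt.Println("message lost")
}
```

Running the sketch prints "message lost": M2 hits the discard branch (lastChunkedMsgID is already 0), and M3 then finds no context at all.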

The bug can be easily reproduced with the test cases TestChunkWithReconnection and TestResendChunkMessages introduced by this PR.

Modifications

The current chunk processing strategy is consistent with the behavior of the Java client:
https://github.com/apache/pulsar/blob/52a4d5ee84fad6af2736376a6fcdd1bc41e7c52f/pulsar-client/src/main/java/org/apache/pulsar/client/impl/ConsumerImpl.java#L1579

When it receives a duplicated first chunk of a chunked message, the consumer discards the current chunked-message context and creates a new context to track the following chunks. For the case described in Motivation, M1's buffer is released and the consumer assembles M2 and M3 into the complete chunked message.
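The fixed strategy can be sketched with the same illustrative model (again, `chunk`, `msgCtx`, and `fixedProcess` are hypothetical names, not the client's real types): the only change is that a first chunk always replaces any existing context for that UUID instead of being treated as out of order:

```go
package main

import "fmt"

// chunk models the metadata the consumer sees on each chunked-message part.
type chunk struct {
	uuid    string
	chunkID int
	payload string
}

// msgCtx tracks the in-progress reassembly of one chunked message.
type msgCtx struct {
	lastChunkedMsgID int
	buffer           string
}

// fixedProcess mirrors the new strategy: a duplicated first chunk
// discards the stale context and starts tracking a fresh one.
func fixedProcess(ctxMap map[string]*msgCtx, c chunk, totalChunks int) (string, bool) {
	if c.chunkID == 0 {
		// A (re)sent first chunk always begins a new context,
		// replacing any partially filled one for the same UUID.
		ctxMap[c.uuid] = &msgCtx{lastChunkedMsgID: -1}
	}
	cc := ctxMap[c.uuid]
	if cc == nil || c.chunkID != cc.lastChunkedMsgID+1 {
		delete(ctxMap, c.uuid)
		return "", false
	}
	cc.buffer += c.payload
	cc.lastChunkedMsgID = c.chunkID
	if c.chunkID == totalChunks-1 {
		delete(ctxMap, c.uuid)
		return cc.buffer, true
	}
	return "", false
}

func main() {
	msgs := []chunk{
		{"0", 0, "hello "}, // M1
		{"0", 0, "hello "}, // M2: resent first chunk replaces M1's context
		{"0", 1, "world"},  // M3
	}
	ctxMap := map[string]*msgCtx{}
	for _, m := range msgs {
		if full, ok := fixedProcess(ctxMap, m, 2); ok {
			fmt.Println("assembled:", full)
		}
	}
}
```

With the duplicate first chunk starting a fresh context, M2 and M3 assemble into "hello world" and the message is delivered.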

Verifying this change

  • Make sure that the change passes the CI checks.

This change added tests and can be verified as follows:
TestChunkWithReconnection
TestResendChunkMessages
TestResendChunkWithAckHoleMessages

Does this pull request potentially affect one of the following parts:

  • Dependencies (does it add or upgrade a dependency): (no)
  • The public API: (no)
  • The schema: (no)
  • The default values of configurations: (no)
  • The wire protocol: (no)

Documentation

  • Does this pull request introduce a new feature? (no)

Copilot AI left a comment:

Pull request overview

This PR fixes a critical bug where consumers could not consume chunked messages when chunks were resent due to reconnections or other failures. The fix aligns the Go client behavior with the Java client by properly handling duplicate chunks.

Changes:

  • Enhanced chunk processing logic to detect and handle duplicate chunks (both corrupted and redelivered)
  • Added proper cleanup of chunk message buffers to prevent memory leaks
  • Added three new test cases to verify the fix works correctly for reconnection scenarios and duplicate chunk handling

Reviewed changes

Copilot reviewed 2 out of 2 changed files in this pull request and generated 4 comments.

Reviewed files:

  • pulsar/consumer_partition.go — implements the duplicate chunk detection and handling logic; adds buffer cleanup calls to prevent memory leaks
  • pulsar/message_chunking_test.go — adds comprehensive tests for chunk reconnection scenarios and duplicate chunk message handling


geniusjoe force-pushed the bugfix/chunk-with-reconnect branch from 2be1c8f to 2c96550 on January 30, 2026, 14:37