
test: cover redis listener batch isolation when one message fails DB insertion#263

Merged
alexgarel merged 7 commits into fix-redis-listener-retry-items from copilot/sub-pr-262
Feb 27, 2026

Conversation

Copilot AI commented Feb 27, 2026

  • Inspect current redis listener implementation and tests for the three new review comments
  • Check GitHub Actions workflow runs/logs (base branch first, then current branch) for relevant failures
  • Apply minimal code changes so splitting only applies to original response chunks
  • Update test to assert failed item is kept in global retry queue and stabilize timing with patched retry interval
  • Run targeted tests for query/events_test.py related to listener retries/splitting
  • Run code review and CodeQL checks, address findings, then reply to all new review comments with commit hash


Copilot AI and others added 2 commits February 27, 2026 15:29
Co-authored-by: alexgarel <144788+alexgarel@users.noreply.github.com>
Co-authored-by: alexgarel <144788+alexgarel@users.noreply.github.com>
Copilot AI changed the title from "[WIP] Fix better error handling in redis_listener" to "test: cover redis listener batch isolation when one message fails DB insertion" on Feb 27, 2026.
query/events.py (Outdated):

```diff
     await messages_received(transaction, chunk)
     # Each message is a tuple of the message id followed by a dict that is the payload
-    last_message_id = response[0][1][-1][0]
+    last_message_id = chunk[0][1][-1][0]
```
alexgarel (Member) commented:
Hum, we should do that only for the original response, not for the retried items.

Can you fix it @copilot?

Copilot (Author) replied:

Done in aa8d05a: last_message_id persistence is now applied only for chunks coming from the original Redis response, not for retried items. No UI changes in this PR.
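The distinction can be sketched as follows. This is a hedged illustration, not the actual `query/events.py` code: the `state` dict, the stub `messages_received`, and the chunk shape `[(stream_name, [(message_id, payload), ...])]` are assumptions based on the diff excerpt above.

```python
import asyncio

# Stand-in for the module-level cursor persisted by the real listener.
state = {"last_message_id": None}

async def messages_received(transaction, chunk):
    pass  # stand-in for the real DB insertion

async def process(transaction, original_chunks, retried_chunks):
    # Chunks from the original Redis response: insert AND advance the cursor.
    for chunk in original_chunks:
        await messages_received(transaction, chunk)
        # chunk is [(stream_name, [(message_id, payload), ...])]
        state["last_message_id"] = chunk[0][1][-1][0]
    # Retried items: insert only; never touch last_message_id, otherwise an
    # old retried id could move the stream cursor backwards.
    for chunk in retried_chunks:
        await messages_received(transaction, chunk)

original = [[("stream", [("1-0", {}), ("2-0", {})])]]
retried = [[("stream", [("0-5", {})])]]  # old id; must not rewind the cursor
asyncio.run(process(None, original, retried))
print(state["last_message_id"])  # → 2-0
```

The key point is that the cursor ends at the last id of the original response, unaffected by the older retried message.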

Comment on lines +130 to +133:

```python
def fail_for_one_message(_transaction, streams):
    messages = streams[0][1]
    if any(message[0] == failing_message_id for message in messages):
        raise Exception("insertion failed")
```
alexgarel (Member) commented:

@copilot
Very good, but now that I made items_to_retry a global of events, can we assert at the end that it contains this item?

Copilot (Author) replied:

Done in aa8d05a: the test now asserts the failed message is present in events.items_to_retry at the end of processing. No UI changes in this PR.
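The batch-isolation pattern the test checks can be sketched like this. It is a hedged, self-contained illustration: the local `items_to_retry` list stands in for the `events.items_to_retry` global, and the chunking/retry bookkeeping is an assumption, not the real listener code.

```python
items_to_retry = []  # stands in for the events.items_to_retry global
inserted = []
failing_message_id = "5-0"

def fail_for_one_message(_transaction, streams):
    # Mirrors the test helper above: fail only the chunk containing the
    # poisoned message; insert everything else.
    messages = streams[0][1]
    if any(message[0] == failing_message_id for message in messages):
        raise Exception("insertion failed")
    inserted.extend(message[0] for message in messages)

def process_chunks(chunks):
    for chunk in chunks:
        try:
            fail_for_one_message(None, chunk)
        except Exception:
            # Keep only the failed chunk's messages for a later retry.
            items_to_retry.extend(chunk[0][1])

chunks = [
    [("stream", [("1-0", {}), ("2-0", {})])],
    [("stream", [("5-0", {})])],  # this chunk fails insertion
]
process_chunks(chunks)

# Healthy chunks were inserted; the failed message is queued for retry.
assert inserted == ["1-0", "2-0"]
assert any(m[0] == failing_message_id for m in items_to_retry)
```

The final assertion is the one the review asked for: the failed message ends up in the global retry queue while the rest of the batch is unaffected.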


```python
assert set_id.called
stream_message_counts = [len(call.args[1][0][1]) for call in messages_received.call_args_list]
assert 5 in stream_message_counts
```
alexgarel (Member) commented:

I think that depending on how long the test takes to run, it might be more than 5 times, isn't it?

So maybe the best option is to patch events.get_retry_interval to return a very big time (like 3600) in any case, so we are sure there won't be any real retry.

And at the end of the test, we can check the scheduled retry date is in this range.
@copilot

Copilot (Author) replied:

Done in aa8d05a: the test patches events.get_retry_interval to a large fixed value and checks the scheduled retry timestamp is within the expected range. No UI changes in this PR.
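The timing-stabilization technique can be sketched as below. This is a hedged, self-contained illustration: `get_retry_interval` and `schedule_retry` are hypothetical stand-ins for the real `events` functions, patched here in the current module rather than in `query.events`.

```python
import time
from unittest import mock

def get_retry_interval():
    return 5  # the real, short interval we want to neutralize in tests

def schedule_retry(message):
    # The listener records when the failed message may be retried next.
    return (message, time.time() + get_retry_interval())

LARGE_INTERVAL = 3600  # far longer than any test run, so no real retry fires

with mock.patch(f"{__name__}.get_retry_interval",
                return_value=LARGE_INTERVAL):
    before = time.time()
    _item, retry_at = schedule_retry(("5-0", {"payload": "x"}))
    after = time.time()

# The scheduled retry date falls in [before + 3600, after + 3600], so the
# retry cannot have fired during the test and the timestamp is verifiable.
assert before + LARGE_INTERVAL <= retry_at <= after + LARGE_INTERVAL
```

Patching the interval to a fixed large value removes the dependence on test runtime, which is why the flaky "exactly 5 calls" style of assertion is no longer needed.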

Copilot AI and others added 3 commits February 27, 2026 16:42
Co-authored-by: alexgarel <144788+alexgarel@users.noreply.github.com>
Co-authored-by: alexgarel <144788+alexgarel@users.noreply.github.com>
Co-authored-by: alexgarel <144788+alexgarel@users.noreply.github.com>
alexgarel (Member) left a comment:

Great!

@alexgarel alexgarel marked this pull request as ready for review February 27, 2026 17:36
@alexgarel alexgarel requested a review from a team as a code owner February 27, 2026 17:36
@alexgarel alexgarel merged commit c666342 into fix-redis-listener-retry-items Feb 27, 2026
4 checks passed
@alexgarel alexgarel deleted the copilot/sub-pr-262 branch February 27, 2026 17:36


2 participants