Add max retry checks for FetchNotificationsWorker #6219

Open
jmartinesp wants to merge 3 commits into develop from misc/add-max-retry-to-notification-fetching

Conversation

@jmartinesp
Member

Content

Add a maximum of 3 retries for fetching notifications with WorkManager.

Motivation and context

Without this cap, a failing work request could be re-scheduled indefinitely.

Part of #6209
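
In essence, the change adds a guard along the lines of the following minimal Kotlin sketch. This is not the exact PR diff: the class shape and the fetchNotifications() call are illustrative; only the runAttemptCount check against a cap of 3 reflects the change described above.

import android.content.Context
import androidx.work.CoroutineWorker
import androidx.work.WorkerParameters

class FetchNotificationsWorkerSketch(
    context: Context,
    params: WorkerParameters,
) : CoroutineWorker(context, params) {
    override suspend fun doWork(): Result {
        return try {
            fetchNotifications() // hypothetical stand-in for the real fetch
            Result.success()
        } catch (e: Exception) {
            // runAttemptCount starts at 0 and increments after each
            // Result.retry(), so this allows the initial run plus up to
            // MAX_RETRY_ATTEMPTS retries before failing for good.
            if (runAttemptCount < MAX_RETRY_ATTEMPTS) Result.retry() else Result.failure()
        }
    }

    private suspend fun fetchNotifications() {
        // Placeholder for the actual notification fetching logic.
    }

    companion object {
        private const val MAX_RETRY_ATTEMPTS = 3
    }
}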

Tests

I don't think this can be easily tested.

Tested devices

  • Physical
  • Emulator
  • OS version(s): 16

Checklist

  • Changes have been tested on an Android device or Android emulator with API 24
  • UI change has been tested on both light and dark themes
  • Accessibility has been taken into account. See https://github.com/element-hq/element-x-android/blob/develop/CONTRIBUTING.md#accessibility
  • Pull request is based on the develop branch
  • Pull request title will be used in the release note; it clearly defines what will change for the user
  • Pull request includes screenshots or videos if containing UI changes
  • You've made a self review of your PR

@jmartinesp jmartinesp requested a review from a team as a code owner February 18, 2026 10:13
@jmartinesp jmartinesp requested review from bmarty and removed request for a team February 18, 2026 10:13
@jmartinesp jmartinesp added the PR-Misc For other changes label Feb 18, 2026
@github-actions
Contributor

github-actions bot commented Feb 18, 2026

📱 Scan the QR code below to install the build (arm64 only) for this PR.
[QR code image]
If you can't scan the QR code you can install the build via this link: https://i.diawi.com/FgcsDY

@codecov

codecov bot commented Feb 18, 2026

Codecov Report

❌ Patch coverage is 59.32203% with 24 lines in your changes missing coverage. Please review.
✅ Project coverage is 81.40%. Comparing base (b23e2a8) to head (b3f43ad).
⚠️ Report is 19 commits behind head on develop.

Files with missing lines                                Patch %   Lines
.../push/impl/workmanager/FetchNotificationsWorker.kt   60.00%   17 Missing and 5 partials ⚠️
.../workmanager/SyncNotificationWorkManagerRequest.kt   50.00%   1 Missing and 1 partial ⚠️
Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #6219      +/-   ##
===========================================
- Coverage    81.42%   81.40%   -0.02%     
===========================================
  Files         2570     2570              
  Lines        69778    69826      +48     
  Branches      8950     8959       +9     
===========================================
+ Hits         56817    56844      +27     
- Misses        9640     9656      +16     
- Partials      3321     3326       +5     

☔ View full report in Codecov by Sentry.

Comment on lines 66 to 72
if (workerParams.runAttemptCount < MAX_RETRY_ATTEMPTS) {
Timber.tag(TAG).w("No network, retrying later")
return@withContext Result.retry()
} else {
Timber.tag(TAG).w("No network available and reached max retry attempts (${workerParams.runAttemptCount}/$MAX_RETRY_ATTEMPTS)")
return@withContext Result.failure()
}
Member

Maybe if there is no network we shouldn't count it as a retry?

Member Author

I don't think we can make an exception just for this case, to be honest 🫤.

Member

Maybe we could let the worker wait for "network" instead of retrying?
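
For context, a minimal sketch of that alternative (the request construction here is illustrative, not taken from the PR): a network constraint makes WorkManager defer the work until connectivity exists, instead of burning retry attempts while offline.

import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder

// Hypothetical request setup: with this constraint, WorkManager holds the
// work until the device reports a connected network, rather than running
// the worker and having it return Result.retry().
val request = OneTimeWorkRequestBuilder<FetchNotificationsWorker>()
    .setConstraints(
        Constraints.Builder()
            .setRequiredNetworkType(NetworkType.CONNECTED)
            .build()
    )
    .build()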

Member Author

I don't think workers can stay alive in the background for long. We could try, but I'm sure that would carry some kind of penalty for future scheduling.

Member Author

Maybe we can make the retry backoff exponential instead of linear: my thought was to keep it linear so the work is retried after a short while, but I had a temporary issue in the HS in mind, not the connection failing.
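
A minimal sketch of what switching the backoff policy could look like (the request construction and delay value are illustrative): with BackoffPolicy.EXPONENTIAL and a 10-second base delay, retries run roughly 10s, 20s, 40s apart, instead of 10s, 20s, 30s under LINEAR.

import java.util.concurrent.TimeUnit
import androidx.work.BackoffPolicy
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkRequest

// Hypothetical request setup: exponential backoff doubles the delay after
// each Result.retry(), which spaces out attempts during longer outages.
val request = OneTimeWorkRequestBuilder<FetchNotificationsWorker>()
    .setBackoffCriteria(
        BackoffPolicy.EXPONENTIAL,
        WorkRequest.MIN_BACKOFF_MILLIS, // 10 seconds
        TimeUnit.MILLISECONDS,
    )
    .build()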

…initely

Also, re-schedule those requests that failed because of a network connection hiccup
@sonarqubecloud
