
feat: Add redis backend for Autopush#1039

Merged
jrconlin merged 15 commits into master from feat/PUSH-587_redis
Jan 9, 2026

Conversation

@jrconlin (Member) commented Dec 10, 2025

This PR is taken from #813, which adds a Redis backend data store. The
original patch was written by [@p1gp1g](https://github.com/p1gp1g).

Note that this patch was originally created for small, stand-alone
server uses, and has not been fully load tested.

Closes PUSH-587
@jrconlin jrconlin marked this pull request as ready for review December 10, 2025 23:59
@jrconlin jrconlin requested a review from Copilot December 11, 2025 00:00

Copilot AI left a comment


Pull request overview

This PR adds Redis as an alternative backend storage option for Autopush, based on original work by @p1gp1g. The implementation provides a complete Redis-based data store for notification and routing information, intended for small, stand-alone server deployments.

Key Changes:

  • Implements a new Redis client (RedisClientImpl) with full DbClient trait support for user management, channel subscriptions, and message storage
  • Updates build system and feature flags to support both BigTable and Redis backends
  • Upgrades several dependencies (actix-web, sentry, uuid, ctor, mockall, hyper, prometheus-client) to newer versions

Reviewed changes

Copilot reviewed 21 out of 22 changed files in this pull request and generated 21 comments.

Summary per file:

| File | Description |
| --- | --- |
| `autopush-common/src/db/redis/redis_client/mod.rs` | Core Redis client implementation with connection management, key generation, and `DbClient` trait methods |
| `autopush-common/src/db/redis/mod.rs` | Redis settings configuration and deserialization |
| `autopush-common/src/db/redis/redis_client/error.rs` | Error handling conversions for `serde_json` errors |
| `autopush-common/src/db/routing.rs` | Adds Redis to the `StorageType` enum and routing logic |
| `autopush-common/src/db/models.rs` | Adds feature gates to limit `RangeKey` to bigtable-only usage |
| `autopush-common/src/db/mod.rs` | Integrates the Redis `StorageType` at the module level |
| `autopush-common/src/notification.rs` | Removes `skip_serializing` from the timestamp field for Redis compatibility |
| `autopush-common/src/db/error.rs` | Adds a `RedisError` variant to the `DbError` enum |
| `autopush-common/src/db/bigtable/bigtable_client/mod.rs` | Updates the import path for `RangeKey` |
| `autopush-common/build.rs` | Updates the build check to accept either the `bigtable` or `redis` feature |
| `autopush-common/Cargo.toml` | Adds the `redis` feature flag and updates the `prometheus-client` version |
| `autoendpoint/src/server.rs` | Adds Redis client instantiation in server initialization |
| `autoendpoint/Cargo.toml` | Adds the `redis` feature flag |
| `autoconnect/autoconnect-settings/src/lib.rs` | Adds Redis test settings and enables an unsafe test for env vars |
| `autoconnect/autoconnect-settings/src/app_state.rs` | Adds Redis client initialization in app state |
| `autoconnect/autoconnect-settings/Cargo.toml` | Adds the `redis` and `unsafe` feature flags |
| `autoconnect/Cargo.toml` | Adds the `redis` and `unsafe` features to the workspace |
| `Cargo.toml` | Updates dependencies and removes `bigtable` from the default `autopush_common` features |
| `Cargo.lock` | Updates the lock file with new dependency versions |
| `Dockerfile` | Adds a `BUILD_ARGS` parameter for conditional feature builds |
| `Makefile` | Adds a `docker-dev-build` target |
| `.devcontainer/devcontainer.json` | Adds `cmake` and the `INDOCKER` environment variable |


* Only use `secs` since converting between secs and millis was causing
  potential errors
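The secs-only convention in that commit can be illustrated with a small sketch. These helper names (`now_secs`, `is_expired`) are hypothetical, not code from the PR; the point is that keeping every timestamp in whole seconds removes the class of bug where a seconds value is compared against a millis one.

```rust
use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical helpers, not PR code. Mixing units silently treats every
// record as roughly 1000x older or newer than it is, so all values here
// stay in whole seconds end to end.

/// Current UNIX time in whole seconds.
fn now_secs() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock before UNIX epoch")
        .as_secs()
}

/// TTL check done entirely in seconds; `saturating_sub` guards against
/// minor clock skew producing an underflow panic.
fn is_expired(stored_secs: u64, now_secs: u64, ttl_secs: u64) -> bool {
    now_secs.saturating_sub(stored_secs) > ttl_secs
}
```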
@jrconlin jrconlin requested review from a team and gruberb December 18, 2025 22:50
@gruberb (Member) left a comment


Left a few comments. At least the creation of the connections can cause a race condition, I believe, so multiple connections could be created at once.

With the (Trixie+) container, it would be useful to be able to specify
the emulator or test redis hosts instead of "localhost". This reuses
a couple of existing environment variables `BIGTABLE_EMULATOR_HOST` and
`REDIS_HOST`. Be sure to specify the IP and Port (e.g.
`export BIGTABLE_EMULATOR_HOST=192.168.0.255:8080`)
@jrconlin jrconlin requested a review from gruberb January 7, 2026 22:23
@gruberb (Member) left a comment


Left a few more comments! Sorry, I didn't catch them earlier. It's a HUGE PR :)

/// NOTE: There is some, very small, potential risk that a desktop client that can
/// somehow remain connected for the duration of MAX_ROUTER_TTL may be dropped as
/// not being "lively".
async fn update_user(&self, user: &mut User) -> DbResult<bool> {
Member:

It might be a millisecond problem, but technically, if a user has a mobile and a desktop app open, network issues, etc., we can get duplicate HELLO messages at roughly the same time. So we could have two writes, and then we could send notifications out to a stale connection.

Redis, I believe, doesn't have a form of lock, just WATCH and UNWATCH. I don't know if we want to introduce that here (we'd need a loop; it's not a whole lot more code, maybe 5-10 lines). But we should at least leave a comment that this can happen? It might be one of those nasty bugs that is really hard to find later.
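The WATCH-based retry pattern the reviewer describes can be sketched without a Redis dependency by simulating check-and-commit with a version counter. All names below are hypothetical; with real Redis you would WATCH the user key, read it, queue the write in MULTI, and retry the whole read-modify-write when EXEC reports the key changed underneath you.

```rust
use std::collections::HashMap;

// In-memory stand-in for Redis optimistic concurrency (hypothetical
// sketch, not PR code). The version counter plays the role of WATCH:
// a commit succeeds only if nobody wrote between our read and our write.

/// key -> (version, value); the version bumps on every successful write.
struct VersionedStore {
    data: HashMap<String, (u64, String)>,
}

impl VersionedStore {
    fn new() -> Self {
        Self { data: HashMap::new() }
    }

    fn get(&self, key: &str) -> Option<(u64, String)> {
        self.data.get(key).cloned()
    }

    /// Commit only if the version we read is still current (like EXEC
    /// after WATCH); returns false so the caller can loop and retry.
    fn compare_and_set(&mut self, key: &str, expected: u64, value: String) -> bool {
        if let Some((v, current)) = self.data.get_mut(key) {
            if *v == expected {
                *v += 1;
                *current = value;
                return true;
            }
            return false;
        }
        if expected == 0 {
            self.data.insert(key.to_string(), (1, value));
            return true;
        }
        false
    }
}

/// The retry loop a WATCH-based update_user would need: read, mutate,
/// attempt commit, and start over if another HELLO raced us.
fn update_user(store: &mut VersionedStore, uaid: &str, connected_at: u64) -> String {
    loop {
        let (version, _old) = store.get(uaid).unwrap_or((0, String::new()));
        let new_value = format!("connected_at={connected_at}");
        if store.compare_and_set(uaid, version, new_value.clone()) {
            return new_value;
        }
        // Another writer won the race; re-read and try again.
    }
}
```

The loop is the "maybe 5-10 lines" the comment mentions; the cost is a re-read per lost race, which should be vanishingly rare in practice.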

Member Author:

I'm not sure that this is a problem.
The desktop app and the mobile app should have two different UAIDs. What can happen is that a UAID could do a HELLO, then the connection is reset (and we don't get the websocket report), and the same UAID tries again. This is a VERY rare event (so much so that we stopped tracking it because after 5 years we never once got a hit).

#[serde(rename = "channelID")]
pub channel_id: Uuid,
pub version: String,
pub timestamp: u64,
Member:

This was skipped before. Does this cause issues?

Member Author:

It shouldn't, but there's more to it.
Since the prior storage systems built the notifications artisanally, there was nothing that just dumped the notification as straight JSON. The original author decided to re-use the Notification struct and make timestamp a real field, since it would be ignored by the UserAgent.
That's fine for small systems, but it isn't a great idea, since the stored data may include additional information that differs from what we send out.
I'm going to create a generic StorableNotification that should be used in this case.
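A rough sketch of the `StorableNotification` idea follows, with hypothetical field choices and a hand-rolled JSON formatter to keep it dependency-free; the real structs live in `autopush-common/src/notification.rs` and would derive serde traits instead. The point is the separation: the storage representation can carry fields (like `timestamp`) that the wire representation omits.

```rust
// Illustrative sketch only, not PR code. Field names mirror the snippet
// quoted above.

/// What the server transmits to the UserAgent (timestamp elided on the wire).
struct Notification {
    channel_id: String,
    version: String,
    timestamp: u64,
}

/// What gets written to Redis: a distinct serialization of the same data,
/// so stored JSON and transmitted JSON can diverge safely.
struct StorableNotification<'a>(&'a Notification);

impl<'a> StorableNotification<'a> {
    fn to_json(&self) -> String {
        format!(
            r#"{{"channelID":"{}","version":"{}","timestamp":{}}}"#,
            self.0.channel_id, self.0.version, self.0.timestamp
        )
    }
}
```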

* Fix message purge to use full redis key path
* Create semi-unique test id to ensure clean runs
* Introduce `StorableNotification` as a JSON formatter for
  `Notification` (since we may store different values than we transmit)
@jrconlin jrconlin requested a review from gruberb January 9, 2026 16:13
@gruberb (Member) left a comment


👍 LGTM! One thing as a comment, not exactly sure how severe it is! We can also land a second PR with a patch after this.

};
Ok(self
.conn
.get_or_try_init(|| async {
Member:

This makes me think: what if the connection breaks after a while, do we have a way to recover?

Member Author:

Yeah, the advice is to use a connection pool for this: https://docs.rs/redis/latest/redis/#connection-pooling.

I'll add that later.

@jrconlin jrconlin merged commit 33398b4 into master Jan 9, 2026
3 checks passed
@jrconlin jrconlin deleted the feat/PUSH-587_redis branch January 9, 2026 18:42
@p1gp1g p1gp1g mentioned this pull request Jan 11, 2026
@wrenix commented Jan 14, 2026

Documentation for the configuration is missing (#1058).

@jrconlin jrconlin mentioned this pull request Jan 29, 2026