feat: Redis pool #609
Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough: This PR introduces deadpool-redis connection pooling throughout the codebase, replacing direct Redis ConnectionManager usage. Changes include adding the deadpool-redis dependency, configuring pool size and timeout settings, refactoring all repository implementations to obtain connections from a pool with centralized error handling, and updating documentation.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 1 passed | ❌ 4 failed (3 warnings, 1 inconclusive)
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
src/repositories/notification/notification_redis.rs (1)
97-106: Unconditional warning log even when no failures occur. Line 101 logs `failed_ids` at `warn!` level unconditionally, even when the vector is empty. This creates noise in logs during normal operation.

Proposed fix:

```diff
 if failed_count > 0 {
     warn!(failed_count = %failed_count, total_count = %ids.len(), "failed to deserialize notifications in batch");
+    warn!(failed_ids = ?failed_ids, "failed to deserialize notifications");
 }
-warn!(failed_ids = ?failed_ids, "failed to deserialize notifications");
 debug!(count = %notifications.len(), "successfully fetched notifications");
```

src/repositories/api_key/api_key_redis.rs (2)
78-80: Incorrect entity name in warning message. The warning message references "Plugin" but this is the api_key repository. This appears to be a copy-paste error from `plugin_redis.rs`.

Proposed fix:

```diff
 None => {
-    warn!("Plugin {} not found in batch fetch", ids[i]);
+    warn!("API key {} not found in batch fetch", ids[i]);
 }
```
293-344: Copy-paste errors: variable names and log messages reference "plugin" instead of "api_key". The `has_entries` and `drop_all_entries` methods contain multiple references to "plugin" that should be "api_key":

- Variable names: `plugin_list_key`, `plugin_ids`, `plugin_key`
- Log messages: "Checking if plugin entries exist", "Plugin entries exist", "Dropping all plugin entries", etc.

This affects code clarity and will make log-based debugging confusing.

Proposed fix for `has_entries`:

```diff
 async fn has_entries(&self) -> Result<bool, RepositoryError> {
     let mut conn = self.get_connection(&self.pool, "has_entries").await?;
-    let plugin_list_key = self.api_key_list_key();
+    let api_key_list_key = self.api_key_list_key();

-    debug!("Checking if plugin entries exist");
+    debug!("Checking if api key entries exist");
     let exists: bool = conn
-        .exists(&plugin_list_key)
+        .exists(&api_key_list_key)
         .await
         .map_err(|e| self.map_redis_error(e, "has_entries_check"))?;

-    debug!("Plugin entries exist: {}", exists);
+    debug!("API key entries exist: {}", exists);
     Ok(exists)
 }
```

Proposed fix for `drop_all_entries`:

```diff
 async fn drop_all_entries(&self) -> Result<(), RepositoryError> {
     let mut conn = self.get_connection(&self.pool, "drop_all_entries").await?;
-    let plugin_list_key = self.api_key_list_key();
+    let api_key_list_key = self.api_key_list_key();

-    debug!("Dropping all plugin entries");
+    debug!("Dropping all api key entries");

     // Get all plugin IDs first
-    let plugin_ids: Vec<String> = conn
-        .smembers(&plugin_list_key)
+    let api_key_ids: Vec<String> = conn
+        .smembers(&api_key_list_key)
         .await
         .map_err(|e| self.map_redis_error(e, "drop_all_entries_get_ids"))?;

-    if plugin_ids.is_empty() {
-        debug!("No plugin entries to drop");
+    if api_key_ids.is_empty() {
+        debug!("No api key entries to drop");
         return Ok(());
     }

     // Use pipeline for atomic operations
     let mut pipe = redis::pipe();
     pipe.atomic();

     // Delete all individual plugin entries
-    for plugin_id in &plugin_ids {
-        let plugin_key = self.api_key_key(plugin_id);
-        pipe.del(&plugin_key);
+    for api_key_id in &api_key_ids {
+        let key = self.api_key_key(api_key_id);
+        pipe.del(&key);
     }

     // Delete the plugin list key
-    pipe.del(&plugin_list_key);
+    pipe.del(&api_key_list_key);

     pipe.exec_async(&mut conn)
         .await
         .map_err(|e| self.map_redis_error(e, "drop_all_entries_pipeline"))?;

-    debug!("Dropped {} plugin entries", plugin_ids.len());
+    debug!("Dropped {} api key entries", api_key_ids.len());
     Ok(())
 }
```

src/repositories/transaction/transaction_redis.rs (1)
157-191: Variable shadowing causes loss of failed IDs from the relayer lookup phase. At line 159, `failed_ids` is re-declared, shadowing the `failed_ids` variable declared at line 128 and populated at line 137. This means transaction IDs that failed the relayer lookup (no reverse mapping found) are lost in the final `BatchRetrievalResult`.

Proposed fix:

```diff
 let mut transactions = Vec::new();
 let mut failed_count = 0;
-let mut failed_ids = Vec::new();
 for (i, value) in values.into_iter().enumerate() {
     match value {
         Some(json) => {
             match self.deserialize_entity::<TransactionRepoModel>(
                 &json,
                 &valid_ids[i],
                 "transaction",
             ) {
                 Ok(tx) => transactions.push(tx),
                 Err(e) => {
                     failed_count += 1;
                     error!(tx_id = %valid_ids[i], error = %e, "failed to deserialize transaction");
+                    failed_ids.push(valid_ids[i].clone());
                     // Continue processing other transactions
                 }
             }
         }
         None => {
             warn!(tx_id = %valid_ids[i], "transaction not found in batch fetch");
             failed_ids.push(valid_ids[i].clone());
         }
     }
 }
```
🤖 Fix all issues with AI agents
In @docs/configuration/index.mdx:
- Around lines 62-63: Update the docs table so the REDIS_POOL_TIMEOUT_MS default matches the actual server-configuration default of 10000, i.e., change the documented default from 15000 to 10000.
In @docs/configuration/storage.mdx:
- Around lines 97-98: The docs list REDIS_POOL_TIMEOUT_MS as 15000, but the code default is 10000. Update the documentation entry to 10000 ms so the table value and any descriptive text match the default defined in the server configuration.
In @src/config/server_config.rs:
- Around lines 254-260: The doc comment for get_redis_pool_max_size() is out of sync: it says "default (100)" but the function actually uses 500. Update the triple-slash comment to state the correct default (500) and mention REDIS_POOL_MAX_SIZE so it matches the unwrap_or_else/unwrap_or values used in the function.
- Around lines 262-268: The doc comment, the get_redis_pool_timeout_ms() fallback, and the external docs disagree. Pick the intended default (15000 ms, if that is correct) and make all three consistent: update get_redis_pool_timeout_ms() to use 15000 as its fallback, update the triple-slash doc comment to state the default (15000 ms), and update index.mdx and storage.mdx to match.
- Around lines 55-58: Tests are missing for the new Redis pool getters get_redis_pool_max_size() and get_redis_pool_timeout_ms(). Add unit tests mirroring the existing patterns (e.g., test_individual_getters_with_defaults() and test_individual_getters_with_custom_values()) that assert defaults are returned when the fields are not set and custom values are returned when provided, using the same helper utilities as other tests to construct ServerConfig.
🧹 Nitpick comments (5)
src/utils/redis.rs (1)
42-47: Consider logging successful pool initialization. Adding a log statement after successful pool validation would help with operational visibility.

📝 Suggested enhancement

```diff
 // Verify the pool is working by getting a connection
 let conn = pool
     .get()
     .await
     .map_err(|e| eyre::eyre!("Failed to get initial Redis connection: {}", e))?;
 drop(conn);

+tracing::info!(
+    max_size = config.redis_pool_max_size,
+    "Redis connection pool initialized successfully"
+);
+
 Ok(Arc::new(pool))
```

src/repositories/notification/notification_redis.rs (1)
455-461: Test pool creation pattern differs from other repositories. Tests here use `cfg.create_pool(Some(deadpool_redis::Runtime::Tokio1))` while other repositories like `transaction_redis.rs` and `relayer_redis.rs` use the builder pattern with explicit `max_size` configuration. Consider using a consistent pattern across all test files for maintainability and to ensure consistent pool sizing during tests.

src/repositories/transaction_counter/transaction_counter_redis.rs (1)
213-220: Consider aligning test pool creation with other test files. This test setup uses `create_pool(Some(Runtime::Tokio1))` while `signer_redis.rs` uses the builder pattern with explicit `max_size(16)`. Both work, but for consistency across the codebase, consider using the same pattern.

♻️ Optional: Use builder pattern for consistency

```diff
 async fn setup_test_repo() -> RedisTransactionCounter {
     let redis_url =
         std::env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1:6379".to_string());
     let cfg = deadpool_redis::Config::from_url(&redis_url);
-    let pool = cfg
-        .create_pool(Some(deadpool_redis::Runtime::Tokio1))
-        .expect("Failed to create Redis pool");
+    let pool = cfg
+        .builder()
+        .expect("Failed to create pool builder")
+        .max_size(16)
+        .runtime(deadpool_redis::Runtime::Tokio1)
+        .build()
+        .expect("Failed to build Redis pool");

     RedisTransactionCounter::new(Arc::new(pool), "test_counter".to_string())
         .expect("Failed to create Redis transaction counter")
 }
```

src/repositories/signer/signer_redis.rs (2)
473-490: Consider extracting pool creation to reduce duplication. The pool creation code in `test_new_repository_empty_prefix_fails` duplicates the logic in `setup_test_repo`. Consider extracting pool creation to a helper function.

♻️ Optional: Extract pool creation helper

```diff
+async fn create_test_pool() -> deadpool_redis::Pool {
+    let redis_url =
+        std::env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1:6379/".to_string());
+    let cfg = Config::from_url(&redis_url);
+    cfg.builder()
+        .expect("Failed to create pool builder")
+        .max_size(16)
+        .runtime(Runtime::Tokio1)
+        .build()
+        .expect("Failed to build Redis pool")
+}
+
 async fn setup_test_repo() -> RedisSignerRepository {
-    let redis_url =
-        std::env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1:6379/".to_string());
-    let cfg = Config::from_url(&redis_url);
-    let pool = cfg
-        .builder()
-        .expect("Failed to create pool builder")
-        .max_size(16)
-        .runtime(Runtime::Tokio1)
-        .build()
-        .expect("Failed to build Redis pool");
-
+    let pool = create_test_pool().await;
     RedisSignerRepository::new(Arc::new(pool), "test".to_string())
         .expect("Failed to create repository")
 }

 #[tokio::test]
 #[ignore = "Requires active Redis instance"]
 async fn test_new_repository_empty_prefix_fails() {
-    let redis_url =
-        std::env::var("REDIS_URL").unwrap_or_else(|_| "redis://127.0.0.1:6379/".to_string());
-    let cfg = Config::from_url(&redis_url);
-    let pool = cfg
-        .builder()
-        .expect("Failed to create pool builder")
-        .max_size(16)
-        .runtime(Runtime::Tokio1)
-        .build()
-        .expect("Failed to build Redis pool");
-
+    let pool = create_test_pool().await;
     let result = RedisSignerRepository::new(Arc::new(pool), "".to_string());
```
363-368: Minor: Unnecessary `.clone()` on `signers.results`. `signers.results` is already owned, so the `.clone()` on line 364 is unnecessary since the `BatchRetrievalResult` is consumed.

♻️ Remove unnecessary clone

```diff
 Ok(PaginatedResult {
-    items: signers.results.clone(),
+    items: signers.results,
     total,
     page: query.page,
     per_page: query.per_page,
 })
```

Based on coding guidelines: "Avoid unnecessary .clone() calls".
📜 Review details
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`Cargo.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (23)
- Cargo.toml
- docs/configuration/index.mdx
- docs/configuration/storage.mdx
- src/config/server_config.rs
- src/repositories/api_key/api_key_redis.rs
- src/repositories/api_key/mod.rs
- src/repositories/network/mod.rs
- src/repositories/network/network_redis.rs
- src/repositories/notification/mod.rs
- src/repositories/notification/notification_redis.rs
- src/repositories/plugin/mod.rs
- src/repositories/plugin/plugin_redis.rs
- src/repositories/redis_base.rs
- src/repositories/relayer/mod.rs
- src/repositories/relayer/relayer_redis.rs
- src/repositories/signer/mod.rs
- src/repositories/signer/signer_redis.rs
- src/repositories/transaction/mod.rs
- src/repositories/transaction/transaction_redis.rs
- src/repositories/transaction_counter/mod.rs
- src/repositories/transaction_counter/transaction_counter_redis.rs
- src/utils/mocks.rs
- src/utils/redis.rs
🧰 Additional context used
📓 Path-based instructions (1)
**/*.rs
📄 CodeRabbit inference engine (.cursor/rules/rust_standards.mdc)
**/*.rs: Follow official Rust style guidelines using rustfmt (edition = 2021, max_width = 100)
Follow Rust naming conventions: snake_case for functions/variables, PascalCase for types, SCREAMING_SNAKE_CASE for constants and statics
Order imports alphabetically and group by: std, external crates, local crates
Include relevant doc comments (///) on public functions, structs, and modules. Use comments for 'why', not 'what'. Avoid redundant doc comments
Document lifetime parameters when they're not obvious
Avoid unsafe code unless absolutely necessary; justify its use in comments
Write idiomatic Rust: Prefer Result over panic, use ? operator for error propagation
Avoid unwrap; handle errors explicitly with Result and custom Error types for all async operations
Prefer header imports over function-level imports of dependencies
Prefer borrowing over cloning when possible
Use &str instead of &String for function parameters
Use &[T] instead of &Vec for function parameters
Avoid unnecessary .clone() calls
Use Vec::with_capacity() when size is known in advance
Use explicit lifetimes only when necessary; infer where possible
Always use Tokio runtime for async code. Await futures eagerly; avoid blocking calls in async contexts
Streams: Use futures::StreamExt for processing streams efficiently
Always use serde for JSON serialization/deserialization with #[derive(Serialize, Deserialize)]
Use type aliases for complex types to improve readability
Implement common traits (Debug, Clone, PartialEq) where appropriate, using derive macros when possible
Implement Display for user-facing types
Prefer defining traits when implementing services to make mocking and testing easier
For tests, prefer existing utils for creating mocks instead of duplicating code
Use assert_eq! and assert! macros appropriately
Keep tests minimal, deterministic, and fast
Use tracing for structured logging, e.g., tracing::info! for request and error logs
When optimizing, prefer clarity first, then performance
M...
Files:
- src/utils/redis.rs
- src/repositories/notification/mod.rs
- src/utils/mocks.rs
- src/repositories/relayer/mod.rs
- src/repositories/signer/signer_redis.rs
- src/repositories/transaction_counter/transaction_counter_redis.rs
- src/repositories/api_key/mod.rs
- src/repositories/transaction_counter/mod.rs
- src/repositories/redis_base.rs
- src/repositories/transaction/mod.rs
- src/repositories/relayer/relayer_redis.rs
- src/repositories/plugin/mod.rs
- src/repositories/notification/notification_redis.rs
- src/repositories/api_key/api_key_redis.rs
- src/config/server_config.rs
- src/repositories/transaction/transaction_redis.rs
- src/repositories/signer/mod.rs
- src/repositories/plugin/plugin_redis.rs
- src/repositories/network/mod.rs
- src/repositories/network/network_redis.rs
🧠 Learnings (4)
📚 Learning: 2025-12-02T11:22:31.387Z
Learnt from: CR
Repo: OpenZeppelin/openzeppelin-relayer PR: 0
File: .cursor/rules/rust_standards.mdc:0-0
Timestamp: 2025-12-02T11:22:31.387Z
Learning: Applies to **/*.rs : Keep tests minimal, deterministic, and fast
Applied to files:
src/repositories/signer/signer_redis.rs
src/repositories/notification/notification_redis.rs
📚 Learning: 2025-12-02T11:22:31.387Z
Learnt from: CR
Repo: OpenZeppelin/openzeppelin-relayer PR: 0
File: .cursor/rules/rust_standards.mdc:0-0
Timestamp: 2025-12-02T11:22:31.387Z
Learning: Applies to **/*.rs : For tests, prefer existing utils for creating mocks instead of duplicating code
Applied to files:
src/repositories/notification/notification_redis.rs
📚 Learning: 2025-12-02T11:22:31.387Z
Learnt from: CR
Repo: OpenZeppelin/openzeppelin-relayer PR: 0
File: .cursor/rules/rust_standards.mdc:0-0
Timestamp: 2025-12-02T11:22:31.387Z
Learning: Applies to **/*.rs : Always use Tokio runtime for async code. Await futures eagerly; avoid blocking calls in async contexts
Applied to files:
src/repositories/notification/notification_redis.rs
📚 Learning: 2025-10-14T09:19:48.698Z
Learnt from: zeljkoX
Repo: OpenZeppelin/openzeppelin-relayer PR: 515
File: src/domain/transaction/evm/evm_transaction.rs:482-505
Timestamp: 2025-10-14T09:19:48.698Z
Learning: The `partial_update` method in the transaction repository has an internal retry mechanism with up to 3 attempts and exponential backoff, which helps mitigate issues where nonce reservation might be lost due to transient DB failures.
Applied to files:
src/repositories/transaction/transaction_redis.rs
🧬 Code graph analysis (15)

src/utils/redis.rs (8): api_key_redis.rs new (25-33); plugin_redis.rs new (25-33); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); network_redis.rs new (33-41); notification_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/notification/mod.rs (9): relayer/mod.rs new_redis (114-116); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); api_key_redis.rs new (25-33); network_redis.rs new (33-41); notification_redis.rs new (25-33); plugin_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/relayer/mod.rs (4): transaction/mod.rs new_redis (140-144); network_redis.rs new (33-41); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/signer/signer_redis.rs (5): transaction_redis.rs new (35-43); network_redis.rs new (33-41); notification_redis.rs new (25-33); plugin_redis.rs new (25-33); relayer_redis.rs new (28-36)

src/repositories/transaction_counter/transaction_counter_redis.rs (6): transaction_redis.rs new (35-43); api_key_redis.rs new (25-33); network_redis.rs new (33-41); notification_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/api_key/mod.rs (11): relayer/mod.rs new_redis (114-116); transaction/mod.rs new_redis (140-144); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); api_key_redis.rs new (25-33); network_redis.rs new (33-41); notification_redis.rs new (25-33); plugin_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33); models/api_key/repository.rs new (24-38)

src/repositories/transaction_counter/mod.rs (4): relayer/mod.rs new_redis (114-116); transaction/mod.rs new_redis (140-144); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45)

src/repositories/transaction/mod.rs (9): relayer/mod.rs new_redis (114-116); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); api_key_redis.rs new (25-33); network_redis.rs new (33-41); notification_redis.rs new (25-33); plugin_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/plugin/mod.rs (12): api_key/mod.rs new_redis (65-68); network/mod.rs new_redis (62-65); notification/mod.rs new_redis (44-48); relayer/mod.rs new_redis (114-116); signer/mod.rs new_redis (54-57); transaction/mod.rs new_redis (140-144); transaction_counter/mod.rs new_redis (75-77); transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); network_redis.rs new (33-41); plugin_redis.rs new (25-33); relayer_redis.rs new (28-36)

src/repositories/notification/notification_redis.rs (3): transaction_redis.rs new (35-43); network_redis.rs new (33-41); plugin_redis.rs new (25-33)

src/repositories/api_key/api_key_redis.rs (3): transaction_redis.rs new (35-43); transaction_counter_redis.rs new (37-45); network_redis.rs new (33-41)

src/repositories/transaction/transaction_redis.rs (7): transaction_counter_redis.rs new (37-45); api_key_redis.rs new (25-33), conn (375-375); network_redis.rs new (33-41); notification_redis.rs new (25-33); plugin_redis.rs new (25-33), conn (352-352); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/signer/mod.rs (1): signer_redis.rs new (25-33)

src/repositories/plugin/plugin_redis.rs (6): transaction_redis.rs new (35-43); api_key_redis.rs new (25-33); network_redis.rs new (33-41); notification_redis.rs new (25-33); relayer_redis.rs new (28-36); signer_redis.rs new (25-33)

src/repositories/network/mod.rs (4): transaction_redis.rs new (35-43); network_redis.rs new (33-41); notification_redis.rs new (25-33); relayer_redis.rs new (28-36)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (11)
- GitHub Check: clippy
- GitHub Check: msrv
- GitHub Check: Unit Tests
- GitHub Check: rustfmt
- GitHub Check: Properties Tests
- GitHub Check: Analyze (actions)
- GitHub Check: Analyze (rust)
- GitHub Check: Redirect rules - openzeppelin-relayer
- GitHub Check: Header rules - openzeppelin-relayer
- GitHub Check: Pages changed - openzeppelin-relayer
- GitHub Check: semgrep/ci
🔇 Additional comments (37)
docs/configuration/storage.mdx (1)
119-141: Good addition of pool tuning guidance. The "Connection Pool Tuning" section provides valuable operational guidance for users deploying to production, including AWS ElastiCache-specific monitoring advice.
src/utils/mocks.rs (1)
319-320: LGTM! Test-appropriate pool configuration. Using a smaller pool size (10) and shorter timeout (5000ms) for tests is appropriate and will help keep tests fast.
src/repositories/redis_base.rs (2)
86-112: Well-structured pool error mapping. The `map_pool_error` method provides comprehensive handling for all `PoolError` variants with meaningful error messages. The timeout type distinction (Wait/Create/Recycle) is helpful for debugging.

114-133: Good centralized connection acquisition helper. The `get_connection` method standardizes connection acquisition across all Redis repositories with consistent error handling.

src/utils/redis.rs (1)
24-50: LGTM! Well-implemented Redis pool initialization. The implementation correctly:
- Uses the builder pattern for flexible configuration
- Separates wait timeout (for acquiring from pool) from connection timeouts (for establishing connections)
- Validates the pool with an initial connection test
- Returns `Arc<Pool>` for shared ownership across repositories

Cargo.toml (1)
54-55: LGTM! Dependency addition for connection pooling. The `deadpool-redis` crate with the `rt_tokio_1` feature correctly integrates with the existing Tokio runtime. The `connection-manager` feature on `redis` is still required by `apalis-redis` for its internal connection handling and cannot be removed.

src/repositories/transaction/transaction_redis.rs (1)
1288-1307: LGTM! The test setup uses the builder pattern with explicit `max_size(16)` configuration, which provides better control over pool behavior during tests. The use of random UUIDs for key prefixes ensures test isolation.

src/repositories/network/network_redis.rs (2)
24-41: LGTM! The migration to pool-based connection management follows the established pattern consistently. The constructor properly validates the key_prefix and stores the pool reference.
77-131: LGTM! The `update_indexes` method properly handles both create and update scenarios, with appropriate cleanup of old indexes when network properties change. The atomic pipeline ensures consistency.

src/repositories/relayer/relayer_redis.rs (2)
19-36: LGTM! The struct and constructor migration to pool-based connection management is consistent with the pattern established across all Redis repositories in this PR.
563-577: LGTM! Test setup uses the builder pattern with explicit `max_size(16)` configuration, consistent with other repository tests like `transaction_redis.rs`.

src/repositories/plugin/plugin_redis.rs (5)
7-10: LGTM - Pool import and dependencies are correctly updated. The import change from `redis::aio::ConnectionManager` to `deadpool_redis::Pool` aligns with the PR's goal of introducing connection pooling.
16-33: LGTM - Struct and constructor changes are consistent with other repositories. The field change to `pool: Arc<Pool>` and the constructor signature update follow the same pattern used across all other Redis repositories in this PR (e.g., `transaction_redis.rs`, `network_redis.rs`, `notification_redis.rs`).
53-57: LGTM - Connection parameter type correctly updated. The `get_by_id_with_connection` method now properly accepts `&mut deadpool_redis::Connection`, which is the correct type returned by `pool.get().await`.
135-141: LGTM - Debug implementation updated to reflect the pool field. The debug output correctly masks the pool with `"<Pool>"` to avoid exposing internal state, maintaining consistency with the previous approach.
342-358: LGTM - Test setup properly migrated to pool-based approach. The test setup correctly:

- Creates the pool via `deadpool_redis::Config::from_url`
- Uses `Runtime::Tokio1` for async compatibility
- Cleans up test data before each test run
- Wraps the pool in `Arc` for the repository constructor

src/repositories/transaction_counter/mod.rs (2)
24-24: LGTM - Pool import added correctly.
75-77: LGTM - Constructor signature updated consistently. The `new_redis` method now accepts `Arc<Pool>` and correctly forwards it to `RedisTransactionCounter::new(pool, key_prefix)`, matching the pattern used in other repository modules.
19-19: LGTM - Pool import correctly added.
65-68: LGTM - Constructor updated to use pool-based initialization. The implementation correctly creates the Redis repository with the pool and wraps it in the storage enum, following the same pattern as other repository modules.
src/repositories/notification/mod.rs (2)
30-30: LGTM - Pool import correctly updated.
44-48: LGTM - Constructor correctly migrated to pool-based approach. The `new_redis` method properly accepts `Arc<Pool>` and forwards both `pool` and `key_prefix` to `RedisNotificationRepository::new`, consistent with other repository modules.

src/repositories/plugin/mod.rs (2)
32-32: LGTM - Pool import correctly updated.
72-75: LGTM - Constructor updated to use pool-based initialization. The implementation follows the same pattern as other repository storage modules, properly forwarding the pool to `RedisPluginRepository::new`.

src/repositories/network/mod.rs (2)
22-22: LGTM - Pool import correctly updated.
62-65: LGTM - Constructor correctly migrated to pool-based approach. The `new_redis` method properly accepts `Arc<Pool>` and creates the Redis repository with the pool, following the established pattern.

src/repositories/relayer/mod.rs (2)
36-36: LGTM - Pool import correctly updated.
114-116: LGTM - Constructor correctly migrated to pool-based approach. The `new_redis` method properly accepts `Arc<Pool>` and forwards it to `RedisRelayerRepository::new(pool, key_prefix)?`, maintaining consistency with all other repository modules in this PR.

src/repositories/signer/mod.rs (1)
39-39: LGTM! Pool-based constructor is consistent with other repositories. The import change to `deadpool_redis::Pool` and the updated `new_redis` constructor signature align with the codebase-wide migration pattern seen in other repositories (e.g., `RelayerRepositoryStorage::new_redis` at src/repositories/relayer/mod.rs:113-115).

Also applies to: 54-57
src/repositories/transaction/mod.rs (1)
24-24: LGTM! Consistent pool-based migration. The import and constructor changes follow the established pattern across all Redis-backed repositories. The delegation to `RedisTransactionRepository::new(pool, key_prefix)` correctly propagates the pool reference.

Also applies to: 140-144
src/repositories/transaction_counter/transaction_counter_redis.rs (2)
12-12: LGTM! Clean migration to pool-based connection management. The struct field, import, and constructor changes correctly implement the pool-based pattern. The validation for empty `key_prefix` is consistent with all other Redis repository implementations.

Also applies to: 20-24, 36-44
74-74: Connection acquisition pattern is correct. Each operation correctly obtains a connection from the pool via `get_connection`, uses it for the Redis command, and the connection is automatically returned to the pool when it goes out of scope. This is the expected usage pattern for deadpool-redis.

Also applies to: 105-105, 135-135, 191-191
src/repositories/signer/signer_redis.rs (5)
7-8: LGTM! Pool-based structure and constructor are well implemented. The struct correctly stores `Arc<Pool>` and the constructor validates the `key_prefix` parameter, matching the pattern across all other Redis repositories in the codebase.

Also applies to: 16-20, 24-33
43-76: Helper methods correctly use pooled connections.

The `add_to_list`, `remove_from_list`, and `get_all_ids` methods properly obtain connections from the pool and use consistent error mapping. The logging at appropriate levels (debug for success, error for failures) follows best practices.
78-139: Batch fetch implementation is efficient and robust.

Using `mget` for batch retrieval is efficient, and the error handling gracefully degrades by tracking `failed_ids` separately rather than failing the entire operation. This is a good pattern for resilience.
150-189: Repository CRUD operations correctly implement pool-based connections.

All CRUD operations follow the correct pattern:
- Validate input parameters
- Obtain connection from pool
- Perform existence checks where appropriate
- Execute Redis commands with proper error mapping
- Maintain list consistency (add_to_list/remove_from_list)
Also applies to: 191-220, 222-265, 267-300
391-427: Atomic batch deletion with pipeline is well implemented.

Using `pipe.atomic()` ensures that all signer entries and the list key are deleted atomically, maintaining data consistency. The early return for empty collections is a good optimization.
Pull request overview
This PR migrates the Redis connection management from a single ConnectionManager to deadpool-redis connection pooling for improved performance and scalability under high load.
Changes:
- Replaced `redis::aio::ConnectionManager` with `deadpool_redis::Pool` throughout the codebase
- Added configurable pool settings: `REDIS_POOL_MAX_SIZE` (default: 500) and `REDIS_POOL_TIMEOUT_MS` (default: 10000 ms)
- Updated all Redis repositories to use the connection pool with proper error handling via a new `get_connection()` helper method
Reviewed changes
Copilot reviewed 23 out of 24 changed files in this pull request and generated 14 comments.
Show a summary per file
| File | Description |
|---|---|
| src/utils/redis.rs | Replaced ConnectionManager initialization with deadpool Pool configuration including health checks and timeouts |
| src/utils/mocks.rs | Added new pool configuration fields to mock server config |
| src/config/server_config.rs | Added redis_pool_max_size and redis_pool_timeout_ms configuration with getters and tests |
| src/repositories/redis_base.rs | Added map_pool_error() and get_connection() helper methods for consistent pool error handling |
| src/repositories/transaction_counter/* | Updated to use Pool instead of ConnectionManager |
| src/repositories/transaction/* | Updated to use Pool with proper connection acquisition in all methods |
| src/repositories/signer/* | Updated to use Pool and improved error handling with map_redis_error |
| src/repositories/relayer/* | Updated to use Pool instead of ConnectionManager |
| src/repositories/plugin/* | Updated to use Pool instead of ConnectionManager |
| src/repositories/notification/* | Updated to use Pool instead of ConnectionManager |
| src/repositories/network/* | Updated to use Pool instead of ConnectionManager |
| src/repositories/api_key/* | Updated to use Pool instead of ConnectionManager |
| docs/configuration/storage.mdx | Added connection pool tuning guide with AWS ElastiCache recommendations |
| docs/configuration/index.mdx | Added documentation for new pool configuration variables |
| Cargo.toml | Added deadpool-redis dependency v0.22 with rt_tokio_1 feature |
| Cargo.lock | Added deadpool and deadpool-redis dependencies |
src/repositories/transaction_counter/transaction_counter_redis.rs
Codecov Report❌ Patch coverage is Additional details and impacted files
Flags with carried forward coverage won't be shown. Click here to find out more.

@@ Coverage Diff @@
## main #609 +/- ##
==========================================
+ Coverage 92.32% 92.36% +0.03%
==========================================
Files 258 258
Lines 94352 94816 +464
==========================================
+ Hits 87112 87577 +465
+ Misses 7240 7239 -1
|
NicoMolinaOZ
left a comment
LGTM, good improvement!
|
This PR is updated with reader endpoint support and connection reuse optimisations. |
NicoMolinaOZ
left a comment
LGTM!
dylankilkenny
left a comment
Just the one comment on the transaction counter get method: should it be using the reader pool rather than the primary?
Good catch! Thanks! |
Summary
This PR introduces the deadpool-redis package in order to optimise Redis connections.
In addition to the Redis pool optimisations, it adds support for a reader endpoint.
Testing Process
Checklist
Note
If you are using Relayer in your stack, consider adding your team or organization to our list of Relayer Users in the Wild!
Summary by CodeRabbit
Release Notes
New Features
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.