[management] fix the issue with duplicated peers with the same key #5053
Conversation
📝 Walkthrough

Add a migration to remove duplicate peer keys before creating a unique index on `peers.key`; adjust index creation for MySQL; remove the GORM index tag from `Peer.Key`; update SQL fixtures and tests to reflect a unique `peers.key` and include `Peer.Key` in test data.

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor MigrationRunner
    participant DB
    participant Logger
    MigrationRunner->>DB: Check if table `peers` exists
    alt table missing
        DB-->>MigrationRunner: Table not found
        MigrationRunner->>Logger: Log and skip RemoveDuplicatePeerKeys
    else table exists
        MigrationRunner->>DB: Query duplicate keys (GROUP BY key HAVING COUNT(*) > 1)
        DB-->>MigrationRunner: List of duplicate keys
        loop for each duplicate key
            MigrationRunner->>DB: SELECT id FROM peers WHERE key = X ORDER BY peer_status_last_seen DESC
            DB-->>MigrationRunner: Ordered peer ids
            MigrationRunner->>DB: DELETE FROM peers WHERE id IN (all but first id)
            DB-->>MigrationRunner: Deletion result
            MigrationRunner->>Logger: Log deletion summary/warnings
        end
        MigrationRunner->>DB: Drop old index `idx_peers_key`
        DB-->>MigrationRunner: Drop result
        MigrationRunner->>DB: Create unique index `idx_peers_key_unique` (MySQL quoting/length handling)
        DB-->>MigrationRunner: Create result
        MigrationRunner->>Logger: Log completion
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Actionable comments posted: 0
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)
491-543: Consider wrapping deletion operations in a transaction.

The function deletes duplicate peers across multiple iterations without transaction protection. If deletion fails partway through (e.g., due to foreign key constraints or database errors), the database could be left in a partially cleaned state, with some duplicates removed and others remaining.
🔄 Wrap deletion loop in transaction
```diff
 func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
 	if !db.Migrator().HasTable("peers") {
 		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
 		return nil
 	}
 	keyColumn := GetColumnName(db, "key")
 	var duplicates []struct {
 		Key   string
 		Count int64
 	}
 	if err := db.Table("peers").
 		Select(keyColumn + ", COUNT(*) as count").
 		Group(keyColumn).
 		Having("COUNT(*) > 1").
 		Find(&duplicates).Error; err != nil {
 		return fmt.Errorf("find duplicate keys: %w", err)
 	}
 	if len(duplicates) == 0 {
 		return nil
 	}
 	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))
+	return db.Transaction(func(tx *gorm.DB) error {
-	for _, dup := range duplicates {
+		for _, dup := range duplicates {
-		var peerIDs []string
-		if err := db.Table("peers").
+			var peerIDs []string
+			if err := tx.Table("peers").
 				Select("id").
 				Where(keyColumn+" = ?", dup.Key).
 				Order("peer_status_last_seen DESC").
 				Pluck("id", &peerIDs).Error; err != nil {
 				return fmt.Errorf("get peers for key: %w", err)
-		}
+			}
-		if len(peerIDs) <= 1 {
-			continue
-		}
+			if len(peerIDs) <= 1 {
+				continue
+			}
-		idsToDelete := peerIDs[1:]
-		log.WithContext(ctx).Warnf("Deleting %d duplicate peers with key %s...: %v",
-			len(idsToDelete), dup.Key[:min(8, len(dup.Key))], idsToDelete)
+			idsToDelete := peerIDs[1:]
+			log.WithContext(ctx).Warnf("Deleting %d duplicate peers with key %s...: %v",
+				len(idsToDelete), dup.Key[:min(8, len(dup.Key))], idsToDelete)
-		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
-			return fmt.Errorf("delete duplicate peers: %w", err)
+			if err := tx.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
+				return fmt.Errorf("delete duplicate peers: %w", err)
+			}
 		}
-	}
-	log.WithContext(ctx).Info("Duplicate peer keys cleanup completed")
-	return nil
+		log.WithContext(ctx).Info("Duplicate peer keys cleanup completed")
+		return nil
+	})
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (9)
- management/server/migration/migration.go
- management/server/peer/peer.go
- management/server/store/sql_store_test.go
- management/server/store/store.go
- management/server/testdata/extended-store.sql
- management/server/testdata/store.sql
- management/server/testdata/store_policy_migrate.sql
- management/server/testdata/store_with_expired_peers.sql
- management/server/testdata/storev1.sql
🧰 Additional context used
🧬 Code graph analysis (1)
management/server/store/store.go (2)
management/server/migration/migration.go (3)
- RemoveDuplicatePeerKeys (491-543)
- DropIndex (362-381)
- CreateIndexIfNotExists (383-430)

management/server/peer/peer.go (1)

- Peer (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
- GitHub Check: CodeQL analysis (go)
- GitHub Check: Agent
- GitHub Check: release_ui
- GitHub Check: FreeBSD Port / Build & Test
- GitHub Check: release
- GitHub Check: Android / Build
- GitHub Check: Client / Unit
- GitHub Check: Build Cache
- GitHub Check: Client / Unit
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: JS / Lint
🔇 Additional comments (14)
management/server/testdata/store_policy_migrate.sql (1)
17-17: LGTM! Index change aligns with unique constraint enforcement.

The index has been correctly updated to enforce uniqueness on the peer key column, matching the changes in the Peer model and other test data files.
management/server/store/sql_store_test.go (1)
972-972: LGTM! Test data updated to accommodate unique key constraint.

All test peer initializations now include explicit `Key` field values, which is necessary given the new unique constraint on `peers.key`. The use of simple unique strings (`"key1"`, `"key2"`) is appropriate for test data.

Also applies to: 987-987, 1015-1015, 1029-1029, 1056-1056, 1065-1065, 1083-1083, 1092-1092
management/server/testdata/extended-store.sql (1)
17-17: LGTM! Index change consistent with unique constraint enforcement.

The index correctly enforces uniqueness on the peer key column, matching the model changes and other test data files.
management/server/testdata/store_with_expired_peers.sql (2)
17-17: LGTM! Unique index correctly applied.

The index change enforces uniqueness on the peer key column, consistent with the model changes.

33-33: LGTM! Test data updated with unique key.

The peer row has been updated with a unique key value (`nVABSKj28gv+JRsf7e0NEGKgSOGTfU/nPB2cpuG56HX=`) that doesn't conflict with other peers in the test data, ensuring compliance with the new unique constraint.

management/server/peer/peer.go (1)

22-22: The `length:64` constraint is correctly handled via dialect-aware migration code, not a concern.

The struct tag is misleading, but the custom migration at `management/server/store/store.go:393` uses `CreateIndexIfNotExists`, which explicitly constructs database-specific SQL:

- MySQL: applies a `key(64)` prefix length
- PostgreSQL/SQLite: omit the length constraint and use just `key`

This ensures correct behavior across all supported database engines. The production code does not have the issue described.
management/server/testdata/store.sql (2)
21-21: LGTM - Unique index enforces peer key uniqueness.

The conversion from a non-unique to a unique index correctly implements the constraint to prevent duplicate peer keys.

57-57: LGTM - Test data updated with valid peer key.

The peer record now includes a non-empty base64-encoded key value, which aligns with the new unique constraint.
management/server/store/store.go (2)
353-356: LGTM - Duplicate removal in pre-auto migration is correctly placed.

Removing duplicate peer keys before auto-migration ensures the unique constraint can be applied without conflicts. The placement in pre-auto migrations is appropriate, since this cleanup must occur before GORM processes the `uniqueIndex` struct tag.
389-394: LGTM - Post-auto index management is properly sequenced.

The two-step process correctly handles the index transition:

- Drop the old non-unique `idx_peers_key` index
- Create the new unique `idx_peers_key_unique` index

This approach is safe because `CreateIndexIfNotExists` will skip creation if GORM's auto-migration already created the index from the struct tag.
management/server/testdata/storev1.sql (1)
17-17: LGTM - Consistent unique index enforcement across test data.

The index change matches the pattern in other test data files, ensuring consistency across the test suite.
management/server/migration/migration.go (3)
407-407: LGTM - MySQL length specification for key column is necessary.

Adding `key` to the list of columns requiring length specification in MySQL is correct. MySQL requires a length prefix for TEXT columns in indexes, and 64 characters is appropriate for the base64-encoded peer keys.
523-523: Good strategy: keeping the most recently seen peer.

Ordering by `peer_status_last_seen DESC` and keeping the first peer is a sensible heuristic for identifying the active/current peer among duplicates. This minimizes disruption to active connections.
528-530: Good defensive check for edge case.

The check for `len(peerIDs) <= 1` handles the unlikely scenario where peers might be deleted between queries or if the duplicate detection query had a false positive. This prevents unnecessary processing.
Pull request overview
This PR fixes an issue where duplicate peers could exist in the database with the same WireGuard public key. The fix adds a unique constraint on the peer key field to prevent future duplicates and includes a migration to clean up existing duplicates.
Key Changes:
- Adds a unique index constraint on the peer `key` field to enforce uniqueness at the database level
- Implements a data migration (`RemoveDuplicatePeerKeys`) that removes duplicate peers, keeping only the most recently seen peer for each duplicate key
- Updates test data files to ensure all peers have unique keys and reflect the new index structure
Reviewed changes
Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| management/server/peer/peer.go | Updated the `Key` field gorm tag to use `uniqueIndex:idx_peers_key_unique` instead of a regular index |
| management/server/migration/migration.go | Added `RemoveDuplicatePeerKeys` function to clean up duplicates and updated `CreateIndexIfNotExists` to handle the "key" column for MySQL |
| management/server/store/store.go | Registered the new migration functions: `RemoveDuplicatePeerKeys` in pre-auto migrations, and index drop/create in post-auto migrations |
| management/server/store/sql_store_test.go | Added `Key` field values to test peer objects to comply with the new unique constraint |
| management/server/testdata/storev1.sql | Changed index from `idx_peers_key` to unique index `idx_peers_key_unique` |
| management/server/testdata/store_with_expired_peers.sql | Changed index to unique and fixed duplicate peer key in test data |
| management/server/testdata/store_policy_migrate.sql | Changed index from `idx_peers_key` to unique index `idx_peers_key_unique` |
| management/server/testdata/store.sql | Changed index to unique and added a proper key value to a peer that previously had an empty key |
| management/server/testdata/extended-store.sql | Changed index from `idx_peers_key` to unique index `idx_peers_key_unique` |
Comments suppressed due to low confidence (1)
management/server/migration/migration.go:543
- The RemoveDuplicatePeerKeys function lacks test coverage. Given that this is a critical data migration function that deletes peer records, it should have comprehensive test coverage to verify its behavior with different scenarios: no duplicates, single duplicate, multiple duplicates, and edge cases where the key field is empty or null.
```go
func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
	if !db.Migrator().HasTable("peers") {
		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
		return nil
	}

	keyColumn := GetColumnName(db, "key")

	var duplicates []struct {
		Key   string
		Count int64
	}
	if err := db.Table("peers").
		Select(keyColumn + ", COUNT(*) as count").
		Group(keyColumn).
		Having("COUNT(*) > 1").
		Find(&duplicates).Error; err != nil {
		return fmt.Errorf("find duplicate keys: %w", err)
	}

	if len(duplicates) == 0 {
		return nil
	}

	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))

	for _, dup := range duplicates {
		var peerIDs []string
		if err := db.Table("peers").
			Select("id").
			Where(keyColumn+" = ?", dup.Key).
			Order("peer_status_last_seen DESC").
			Pluck("id", &peerIDs).Error; err != nil {
			return fmt.Errorf("get peers for key: %w", err)
		}

		if len(peerIDs) <= 1 {
			continue
		}

		idsToDelete := peerIDs[1:]

		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
			return fmt.Errorf("delete duplicate peers: %w", err)
		}
	}

	return nil
}
```
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @management/server/migration/migration.go:
- Around line 528-530: The condition check `if len(peerIDs) <= 1` in the
duplication cleanup loop (the branch that follows the DB query that found
COUNT(*) > 1) can hide a race or logic bug; update the branch to log a warning
including contextual identifiers (e.g., the duplicate key/peer identifier, the
SQL query parameters or returned peerIDs, and any shard/partition IDs) whenever
it occurs so you can trace deletions that happen between queries; keep the
existing continue behavior but emit a structured processLogger.Warn or similar
with the function/method context (referencing the variables `peerIDs` and the
duplicate-detection query) to aid debugging.
- Around line 491-540: The duplicate-removal loop in RemoveDuplicatePeerKeys
runs deletes outside any transaction; wrap the lookup-and-delete work in a
single DB transaction so failures roll back. Change the logic to call
db.Transaction(func(tx *gorm.DB) error { ... }) and inside use
tx.Table(...).Select(...).Where(...).Order(...).Pluck(...) and
tx.Table(...).Where(...).Delete(...) for each duplicate key, returning the error
to abort/rollback the transaction; on success let the transaction commit and
return nil from the Transaction callback. Ensure error messages remain wrapped
(fmt.Errorf(...)) when returning from the transaction.
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)
516-537: Enhance logging for debugging and confirmation.

Consider adding:
- Sample duplicate keys in the warning message for debugging.
- A completion log message confirming successful cleanup.
📋 Proposed logging improvements
```diff
-	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))
+	sampleKeys := make([]string, 0, min(3, len(duplicates)))
+	for i := 0; i < min(3, len(duplicates)); i++ {
+		keyPreview := duplicates[i].Key
+		if len(keyPreview) > 16 {
+			keyPreview = keyPreview[:16] + "..."
+		}
+		sampleKeys = append(sampleKeys, keyPreview)
+	}
+	log.WithContext(ctx).Warnf("Found %d duplicate peer keys (samples: %v), cleaning up", len(duplicates), sampleKeys)

 	for _, dup := range duplicates {
 		// ... existing code ...
 	}

+	log.WithContext(ctx).Infof("Successfully cleaned up %d duplicate peer keys", len(duplicates))
 	return nil
 }
```

Note: the `min` function may need to be defined if not available in your Go version.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- management/server/migration/migration.go
- management/server/store/sql_store_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
- management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
- GitHub Check: iOS / Build
- GitHub Check: Android / Build
- GitHub Check: Build Cache
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: release
- GitHub Check: FreeBSD Port / Build & Test
- GitHub Check: release_ui
- GitHub Check: release_ui_darwin
- GitHub Check: Darwin
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
🔇 Additional comments (1)
management/server/migration/migration.go (1)
407-407: LGTM! MySQL index length specification for key column.Adding "key" to the list of columns requiring length specification (64) for MySQL indexes is correct and aligns with the migration to enforce a unique constraint on peers.key.
Actionable comments posted: 0
🧹 Nitpick comments (2)
management/server/store/sql_store_test.go (2)
313-319: Give the SQLite delete-account peer a concrete key

Assigning `Key: "peerkey"` to the peer used in `TestSqlite_DeleteAccount` means the subsequent `GetAccountByPeerPubKey("peerkey")` check now relates to an actual indexed key that was present before deletion, making the test more accurate. You might optionally assert that the lookup succeeds before calling `DeleteAccount` to tighten the test, but it's not required.
968-989: Assign distinct keys in `TestSqlite_GetTakenIPs` peers

The new `Key: "key1"` / `"key2"` values ensure the additional peers comply with the unique key constraint while the test still only asserts on collected IPs. Consider keeping this pattern (unique, test-specific key strings) for any future peers added against the same fixtures.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- management/server/account_test.go
- management/server/group_test.go
- management/server/peer_test.go
- management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (25)
- GitHub Check: Client / Unit
- GitHub Check: Management / Benchmark (API) (amd64, postgres)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Management / Benchmark (API) (amd64, sqlite)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Management / Benchmark (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Management / Integration (amd64, postgres)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Management / Benchmark (amd64, postgres)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Client / Unit (386)
- GitHub Check: release
- GitHub Check: release_ui
- GitHub Check: FreeBSD Port / Build & Test
- GitHub Check: release_ui_darwin
- GitHub Check: Darwin
- GitHub Check: Windows
- GitHub Check: Linux
- GitHub Check: Android / Build
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
- GitHub Check: Client / Unit
🔇 Additional comments (15)
management/server/group_test.go (1)
893-899: Populate `Peer.Key` in concurrency test to align with unique key constraint

Using `Key: "key" + strconv.Itoa(i)` gives each test peer a deterministic, unique key, which keeps this stress test compatible with the new unique index on `peers.key`. Looks good.

management/server/peer_test.go (2)

2249-2253: Set `Key` on peers in `Test_AddPeer` to avoid empty-key inserts

Assigning `Key: "key" + strconv.Itoa(i)` ensures each concurrently added peer has a unique, non-empty key, matching the DB's unique constraint on `peers.key` and keeping the benchmark-like test focused on AddPeer performance rather than constraint violations.

2129-2143: Initialize `Peer.Key` in `Test_DeletePeer` fixture peers

Adding `Key: "key1"` / `Key: "key2"` to the in-memory peers makes the fixture consistent with the unique key constraint and with how production peers are indexed, while keeping the test's behavior (delete by ID, group membership update) unchanged.

management/server/store/sql_store_test.go (11)
94-103: Use `peerID` as `Key` in large account save test

Setting `Key: peerID` guarantees uniqueness across the 6000 generated peers and keeps `runLargeTest` compatible with the unique index on `peers.key` without changing what the test measures.

232-252: Populate peer keys in `Test_SaveAccount`

Adding `"peerkey"` / `"peerkey2"` for the two test peers ensures the peer-key index is populated with non-empty values and keeps the various `GetAccountBy*` index lookups meaningful. The change is self-contained to this empty test DB per engine.

452-460: Ensure `SavePeer` test peer has a non-empty key

Providing `Key: "peerkey"` here keeps `SavePeer` interactions aligned with the unique `peers.key` column and avoids relying on empty-string behavior. The rest of the test (NotFound on unknown peer, then successful update) remains unchanged.

509-516: Set key on peer in `TestSqlStore_SavePeerStatus`

Adding `Key: "peerkey"` to the helper peer makes the in-memory account state match the DB schema expectations without affecting the intended status-only assertions.

738-744: Make helper-created peers globally unique via `peerkey` prefix

Using `Key: "peerkey" + str` in `newAccount` guarantees globally unique keys for all accounts created by this helper, which is important now that `peers.key` has a global unique index. This is a good choice.

1011-1031: Set keys for peers in `TestSqlite_GetPeerLabelsInAccount`

Adding `Key: "key1"` / `"key2"` makes these peers consistent with the unique key index; the test's focus on label collection remains intact and unaffected.

1052-1066: Decouple key uniqueness from DNS-label uniqueness tests

By giving `peer1` and `peer2` different keys (`"key1"`, `"key2"`) but the same DNS label, `Test_AddPeerWithSameDnsLabel` now clearly exercises the DNS-label uniqueness path rather than accidentally hitting the key constraint. This matches the test's intent.

1079-1093: Decouple key uniqueness from IP uniqueness tests

Similarly, distinct keys for `peer1` and `peer2` in `Test_AddPeerWithSameIP` ensure the failure is driven purely by duplicate IPs, keeping the test aligned with its purpose after key became unique.

3704-3710: Give benchmark peers deterministic unique keys

Using `Key: fmt.Sprintf("key-%d", i)` in `BenchmarkGetAccountPeers` avoids unique-constraint failures while seeding 1000 peers and provides realistic data for any future key-based lookups in this benchmark.

3871-3878: Good coverage of key-to-user mapping in `GetUserIDByPeerKey`

The test peer now has `Key: peerKey`, and the test correctly asserts that `GetUserIDByPeerKey` returns the associated `userID`, validating the new lookup path via the unique key index.

3908-3915: Explicitly cover peers with no associated user in `GetUserIDByPeerKey`

Here the peer has a key but an empty `UserID`, and the test verifies that `GetUserIDByPeerKey` returns `""` without error. This neatly documents the expected behavior for key-only peers.

management/server/account_test.go (1)

3468-3474: LGTM! Key fields correctly added for unique constraint.

The addition of unique `Key` values (`"key1"` and `"key2"`) to the peer instances aligns with the PR's objective to enforce unique keys and prevent duplicates. The keys are distinct, non-empty, and consistent with the simplified test key patterns used elsewhere in this file.
Actionable comments posted: 0
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)
491-540: Consider wrapping in a transaction and adding a completion log.

Two observations:

1. Missing transaction: other migration functions in this file use `db.Transaction()` for atomicity. While this cleanup is idempotent and can recover on re-run, wrapping the detection and deletion in a transaction would ensure consistency if a failure occurs mid-operation.
2. Missing completion log: for consistency with other migration functions (e.g., lines 119, 221, 358), consider adding a completion log message.
♻️ Suggested improvements
```diff
 func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
 	if !db.Migrator().HasTable("peers") {
 		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
 		return nil
 	}
 	keyColumn := GetColumnName(db, "key")
 	var duplicates []struct {
 		Key   string
 		Count int64
 	}
 	if err := db.Table("peers").
 		Select(keyColumn + ", COUNT(*) as count").
 		Group(keyColumn).
 		Having("COUNT(*) > 1").
 		Find(&duplicates).Error; err != nil {
 		return fmt.Errorf("find duplicate keys: %w", err)
 	}
 	if len(duplicates) == 0 {
 		return nil
 	}
 	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))
-	for _, dup := range duplicates {
-		var peerIDs []string
-		if err := db.Table("peers").
-			Select("id").
-			Where(keyColumn+" = ?", dup.Key).
-			Order("peer_status_last_seen DESC").
-			Pluck("id", &peerIDs).Error; err != nil {
-			return fmt.Errorf("get peers for key: %w", err)
-		}
-
-		if len(peerIDs) <= 1 {
-			continue
-		}
-
-		idsToDelete := peerIDs[1:]
-
-		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
-			return fmt.Errorf("delete duplicate peers: %w", err)
-		}
-	}
-
-	return nil
+	if err := db.Transaction(func(tx *gorm.DB) error {
+		for _, dup := range duplicates {
+			var peerIDs []string
+			if err := tx.Table("peers").
+				Select("id").
+				Where(keyColumn+" = ?", dup.Key).
+				Order("peer_status_last_seen DESC").
+				Pluck("id", &peerIDs).Error; err != nil {
+				return fmt.Errorf("get peers for key: %w", err)
+			}
+
+			if len(peerIDs) <= 1 {
+				continue
+			}
+
+			idsToDelete := peerIDs[1:]
+
+			if err := tx.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
+				return fmt.Errorf("delete duplicate peers: %w", err)
+			}
+		}
+		return nil
+	}); err != nil {
+		return err
+	}
+
+	log.WithContext(ctx).Infof("Duplicate peer key cleanup completed")
+	return nil
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- management/server/account_test.go
- management/server/migration/migration.go
- management/server/store/sql_store_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
- management/server/account_test.go
- management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
- GitHub Check: Build Cache
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: iOS / Build
- GitHub Check: Android / Build
- GitHub Check: release
- GitHub Check: Linux
- GitHub Check: release_ui
- GitHub Check: release_ui_darwin
- GitHub Check: Windows
- GitHub Check: Darwin
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
🔇 Additional comments (1)
management/server/migration/migration.go (1)
407-411: LGTM!

Adding `key` to the columns requiring a length specification for MySQL indexes is appropriate and consistent with the handling for `ip` and `dns_label`.
should we remove the gorm tag from the peers struct and just leave a comment? |
Actionable comments posted: 0
🧹 Nitpick comments (2)
management/server/migration/migration_test.go (2)
344-349: Consider removing the unused `PeerStatusConnected` field.

The `PeerStatusConnected` field is defined but not used in any of the tests or in the `RemoveDuplicatePeerKeys` function (which only orders by `peer_status_last_seen`). Removing it would make the test struct more minimal and focused.

♻️ Proposed cleanup

```diff
 type testPeer struct {
 	ID                  string `gorm:"primaryKey"`
 	Key                 string `gorm:"index"`
 	PeerStatusLastSeen  time.Time
-	PeerStatusConnected bool
 }
```
387-428: Excellent test coverage for the duplicate removal logic!The test thoroughly verifies the core functionality:
- Correctly sets up multiple duplicates with varying timestamps
- Validates that the most recent peer for each key is retained
- Checks both the final count and the specific peer IDs that should remain or be deleted
The assertions are comprehensive and the test data is well-structured.
Optional enhancement: consider adding an edge-case test for peers with the same key and identical `LastSeen` timestamps. The deletion order might be non-deterministic in such cases, though this scenario is unlikely in production.
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
management/server/migration/migration_test.go
🧰 Additional context used
🧬 Code graph analysis (1)
management/server/migration/migration_test.go (1)
management/server/migration/migration.go (1)
- RemoveDuplicatePeerKeys (491-540)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: Build Cache
- GitHub Check: Linux
- GitHub Check: Darwin
- GitHub Check: Android / Build
- GitHub Check: Windows
- GitHub Check: iOS / Build
- GitHub Check: release
- GitHub Check: release_ui_darwin
- GitHub Check: release_ui
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
🔇 Additional comments (4)
management/server/migration/migration_test.go (4)
355-362: LGTM! Clean test setup helper.

The helper follows Go testing best practices by marking itself with `t.Helper()`, ensuring a fresh table state by dropping and recreating, and handling errors appropriately.

364-385: LGTM! Good baseline test for the no-duplicates case.

This test correctly verifies that when all peer keys are unique, the cleanup function leaves all peers intact. The test is clear and well-structured.

430-435: LGTM! Good edge case coverage.

This test ensures the cleanup function handles an empty table gracefully, which is important for robustness.

437-443: LGTM! Essential edge case for migration safety.

This test verifies that the cleanup function safely handles the case where the peers table doesn't exist, which is crucial for migration robustness. The test correctly aligns with the `HasTable` check in the `RemoveDuplicatePeerKeys` implementation.
Actionable comments posted: 2
🤖 Fix all issues with AI agents
In @management/server/migration/migration.go:
- Around line 492-541: The cleanup iterates and deletes duplicate rows without a
transaction, risking partial cleanup on error; wrap the duplicate-finding and
all per-duplicate queries/deletes inside a single DB transaction in
RemoveDuplicatePeerKeys by using db.Transaction (or tx :=
db.Begin()/tx.Commit()/tx.Rollback()) and switch subsequent calls
(Select/Group/Having/Find, Select/Where/Order/Pluck, and Delete) to use the
transactional *gorm.DB (tx) so any error returns from the transaction handler
cause a rollback and successful completion commits.
- Line 524: The ORDER clause currently uses Order("peer_status_last_seen DESC")
which leaves NULL ordering DB-dependent; update that Order usage to explicitly
put NULLs last so deduplication is deterministic—either switch to a GORM-aware
form that specifies NULLS LAST (e.g., using GORM/clauses.NullsLast on the
peer_status_last_seen column) or replace the order expression with a
COALESCE-based fallback (e.g., COALESCE(peer_status_last_seen, ...) DESC) to
ensure NULL values are treated as oldest and always sorted last.
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)
513-515: Consider logging when no duplicates are found.

The function returns early without logging when no duplicates are found, which could make debugging or auditing migration runs more difficult.
📝 Suggested logging addition
```diff
 if len(duplicates) == 0 {
+	log.WithContext(ctx).Debug("No duplicate peer keys found")
 	return nil
 }
```
📜 Review details
Configuration used: defaults
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
management/server/migration/migration.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
- GitHub Check: Management / Benchmark (API) (amd64, postgres)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Signal / Unit (amd64)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Management / Integration (amd64, sqlite)
- GitHub Check: Management / Integration (amd64, postgres)
- GitHub Check: Management / Benchmark (amd64, postgres)
- GitHub Check: Management / Benchmark (API) (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Benchmark (amd64, sqlite)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Client / Unit (386)
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: release
- GitHub Check: release_ui_darwin
- GitHub Check: release_ui
- GitHub Check: Android / Build
- GitHub Check: iOS / Build
- GitHub Check: Client / Unit
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: JS / Lint
- GitHub Check: Darwin
🔇 Additional comments (1)
management/server/migration/migration.go (1)
407-411: LGTM! Proper MySQL TEXT column index handling.

The changes correctly extend the index creation logic to include the `key` column with a length specifier, which is required for indexing TEXT columns in MySQL. The backtick quoting for all columns is appropriate for MySQL.
Describe your changes
Issue ticket number and link
Stack
Checklist
Documentation
Select exactly one:
small bug fix
Docs PR URL (required if "docs added" is checked)
Paste the PR link from https://github.com/netbirdio/docs here:
https://github.com/netbirdio/docs/pull/__
Summary by CodeRabbit
Bug Fixes
Database
Tests