Conversation


@crn4 crn4 commented Jan 7, 2026

Describe your changes

Issue ticket number and link

Stack

Checklist

  • It is a bug fix
  • It is a typo/documentation fix
  • It is a feature enhancement
  • It is a refactor
  • Created tests that fail without the change (if possible)

By submitting this pull request, you confirm that you have read and agree to the terms of the Contributor License Agreement.

Documentation

Select exactly one:

  • I added/updated documentation for this change
  • Documentation is not needed for this change (explain why)

Small bug fix.

Docs PR URL (required if "docs added" is checked)

Paste the PR link from https://github.com/netbirdio/docs here:

https://github.com/netbirdio/docs/pull/__

Summary by CodeRabbit

  • Bug Fixes

    • Remove older duplicate peer entries so only the most recently seen peer per key remains, preventing conflicts during migration.
  • Database

    • Enforce uniqueness on peer keys by converting non-unique indexes to unique and adding safe pre/post migration steps; improve index handling for MySQL compatibility.
  • Tests

    • Update fixtures to include explicit peer key values and add tests covering duplicate-key detection and removal during migration.


Copilot AI review requested due to automatic review settings January 7, 2026 16:40

coderabbitai bot commented Jan 7, 2026

📝 Walkthrough

Walkthrough

Add a migration to remove duplicate peer keys before creating a unique index on peers.key; adjust index creation for MySQL; remove the GORM index tag from Peer.Key; update SQL fixtures and tests to reflect a unique peers.key and include Peer.Key in test data.

Changes

  • Migration logic (management/server/migration/migration.go): Added RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error, which skips if the peers table is missing, finds duplicate key groups, keeps the most recently seen peer per key (by peer_status_last_seen), deletes the older duplicates, and returns errors while logging progress.
  • Migration wiring (management/server/store/store.go): Registered RemoveDuplicatePeerKeys in pre-auto migrations; added post-auto steps to drop idx_peers_key and create the unique idx_peers_key_unique (includes MySQL column quoting and length specifiers for ip, dns_label, and key).
  • Peer model (management/server/peer/peer.go): Removed the gorm:"index" tag from Peer.Key (the field is now Key string // uniqueness index (check migrations)).
  • SQL schema / fixtures (management/server/testdata/: store.sql, storev1.sql, extended-store.sql, store_policy_migrate.sql, store_with_expired_peers.sql): Replaced the non-unique index idx_peers_key with the unique idx_peers_key_unique on peers(key) across fixtures; updated sample rows to include non-empty key values where applicable.
  • Tests & test data updates (management/server/store/sql_store_test.go, management/server/account_test.go, management/server/group_test.go, management/server/peer_test.go, management/server/migration/migration_test.go): Added/populated the Key field on test nbpeer.Peer instances (e.g., "key1", "key2"); added tests for RemoveDuplicatePeerKeys covering the no-duplicates, duplicates (retain most recent), empty-table, and missing-table scenarios.

Sequence Diagram(s)

sequenceDiagram
    actor MigrationRunner
    participant DB
    participant Logger

    MigrationRunner->>DB: Check if table `peers` exists
    alt table missing
        DB-->>MigrationRunner: Table not found
        MigrationRunner->>Logger: Log and skip RemoveDuplicatePeerKeys
    else table exists
        MigrationRunner->>DB: Query duplicate keys (GROUP BY key HAVING COUNT(*)>1)
        DB-->>MigrationRunner: List of duplicate keys
        loop for each duplicate key
            MigrationRunner->>DB: SELECT id FROM peers WHERE key = X ORDER BY peer_status_last_seen DESC
            DB-->>MigrationRunner: Ordered peer ids
            MigrationRunner->>DB: DELETE FROM peers WHERE id IN (all but first id)
            DB-->>MigrationRunner: Deletion result
            MigrationRunner->>Logger: Log deletion summary/warnings
        end
        MigrationRunner->>DB: Drop old index `idx_peers_key`
        DB-->>MigrationRunner: Drop result
        MigrationRunner->>DB: Create unique index `idx_peers_key_unique` (MySQL quoting/length handling)
        DB-->>MigrationRunner: Create result
        MigrationRunner->>Logger: Log completion
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Suggested reviewers

  • pascal-fischer

Poem

🐰 I hopped through rows both old and new,
Nibbled duplicates down to one true view,
Replanted indexes, tidy and bright,
Kept the newest friend in the warm sunlight,
A little hop to keep the data right.

🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (2 warnings)

  • Description check ⚠️ Warning: The description is incomplete; it lacks implementation details, issue linkage, and an explanation of the changes. Only the checklist is filled in (marking the PR as a bug fix and stating that docs are not needed). Resolution: add details about the approach (unique index enforcement, duplicate-removal migration) and link the relevant issue ticket for context.
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 5.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (1 passed)

  • Title check ✅ Passed: The title directly relates to the main changeset: fixing duplicated peers sharing the same key by enforcing a unique index and removing duplicates.



📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 57719ae and d180e25.

📒 Files selected for processing (1)
  • management/server/peer/peer.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • management/server/peer/peer.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (24)
  • GitHub Check: Management / Benchmark (API) (amd64, sqlite)
  • GitHub Check: Management / Integration (amd64, postgres)
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Management / Integration (amd64, sqlite)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: Management / Benchmark (API) (amd64, postgres)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Client / Unit (386)
  • GitHub Check: Management / Benchmark (amd64, sqlite)
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Management / Benchmark (amd64, postgres)
  • GitHub Check: Client / Unit
  • GitHub Check: release
  • GitHub Check: release_ui_darwin
  • GitHub Check: Client / Unit
  • GitHub Check: JS / Lint
  • GitHub Check: Android / Build
  • GitHub Check: Darwin
  • GitHub Check: Windows
  • GitHub Check: Client / Unit
  • GitHub Check: Linux



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
management/server/migration/migration.go (1)

491-543: Consider wrapping deletion operations in a transaction.

The function deletes duplicate peers across multiple iterations without transaction protection. If deletion fails partway through (e.g., due to foreign key constraints or database errors), the database could be left in a partially cleaned state, with some duplicates removed and others remaining.

🔄 Wrap deletion loop in transaction
 func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
 	if !db.Migrator().HasTable("peers") {
 		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
 		return nil
 	}
 
 	keyColumn := GetColumnName(db, "key")
 
 	var duplicates []struct {
 		Key   string
 		Count int64
 	}
 
 	if err := db.Table("peers").
 		Select(keyColumn + ", COUNT(*) as count").
 		Group(keyColumn).
 		Having("COUNT(*) > 1").
 		Find(&duplicates).Error; err != nil {
 		return fmt.Errorf("find duplicate keys: %w", err)
 	}
 
 	if len(duplicates) == 0 {
 		return nil
 	}
 
 	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))
 
+	return db.Transaction(func(tx *gorm.DB) error {
-	for _, dup := range duplicates {
+		for _, dup := range duplicates {
-		var peerIDs []string
-		if err := db.Table("peers").
+			var peerIDs []string
+			if err := tx.Table("peers").
 				Select("id").
 				Where(keyColumn+" = ?", dup.Key).
 				Order("peer_status_last_seen DESC").
 				Pluck("id", &peerIDs).Error; err != nil {
 				return fmt.Errorf("get peers for key: %w", err)
-		}
+			}
 
-		if len(peerIDs) <= 1 {
-			continue
-		}
+			if len(peerIDs) <= 1 {
+				continue
+			}
 
-		idsToDelete := peerIDs[1:]
-		log.WithContext(ctx).Warnf("Deleting %d duplicate peers with key %s...: %v",
-			len(idsToDelete), dup.Key[:min(8, len(dup.Key))], idsToDelete)
+			idsToDelete := peerIDs[1:]
+			log.WithContext(ctx).Warnf("Deleting %d duplicate peers with key %s...: %v",
+				len(idsToDelete), dup.Key[:min(8, len(dup.Key))], idsToDelete)
 
-		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
-			return fmt.Errorf("delete duplicate peers: %w", err)
+			if err := tx.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
+				return fmt.Errorf("delete duplicate peers: %w", err)
+			}
 		}
-	}
 
-	log.WithContext(ctx).Info("Duplicate peer keys cleanup completed")
-	return nil
+		log.WithContext(ctx).Info("Duplicate peer keys cleanup completed")
+		return nil
+	})
 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 8722b79 and 4454e9b.

📒 Files selected for processing (9)
  • management/server/migration/migration.go
  • management/server/peer/peer.go
  • management/server/store/sql_store_test.go
  • management/server/store/store.go
  • management/server/testdata/extended-store.sql
  • management/server/testdata/store.sql
  • management/server/testdata/store_policy_migrate.sql
  • management/server/testdata/store_with_expired_peers.sql
  • management/server/testdata/storev1.sql
🧰 Additional context used
🧬 Code graph analysis (1)
management/server/store/store.go (2)
management/server/migration/migration.go (3)
  • RemoveDuplicatePeerKeys (491-543)
  • DropIndex (362-381)
  • CreateIndexIfNotExists (383-430)
management/server/peer/peer.go (1)
  • Peer (16-58)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (12)
  • GitHub Check: CodeQL analysis (go)
  • GitHub Check: Agent
  • GitHub Check: release_ui
  • GitHub Check: FreeBSD Port / Build & Test
  • GitHub Check: release
  • GitHub Check: Android / Build
  • GitHub Check: Client / Unit
  • GitHub Check: Build Cache
  • GitHub Check: Client / Unit
  • GitHub Check: Linux
  • GitHub Check: Windows
  • GitHub Check: JS / Lint
🔇 Additional comments (14)
management/server/testdata/store_policy_migrate.sql (1)

17-17: LGTM! Index change aligns with unique constraint enforcement.

The index has been correctly updated to enforce uniqueness on the peer key column, matching the changes in the Peer model and other test data files.

management/server/store/sql_store_test.go (1)

972-972: LGTM! Test data updated to accommodate unique key constraint.

All test peer initializations now include explicit Key field values, which is necessary given the new unique constraint on peers.key. The use of simple unique strings ("key1", "key2") is appropriate for test data.

Also applies to: 987-987, 1015-1015, 1029-1029, 1056-1056, 1065-1065, 1083-1083, 1092-1092

management/server/testdata/extended-store.sql (1)

17-17: LGTM! Index change consistent with unique constraint enforcement.

The index correctly enforces uniqueness on the peer key column, matching the model changes and other test data files.

management/server/testdata/store_with_expired_peers.sql (2)

17-17: LGTM! Unique index correctly applied.

The index change enforces uniqueness on the peer key column, consistent with the model changes.


33-33: LGTM! Test data updated with unique key.

The peer row has been updated with a unique key value (nVABSKj28gv+JRsf7e0NEGKgSOGTfU/nPB2cpuG56HX=) that doesn't conflict with other peers in the test data, ensuring compliance with the new unique constraint.

management/server/peer/peer.go (1)

22-22: The length:64 constraint is correctly handled via dialect-aware migration code, not a concern.

The struct tag is misleading but the custom migration at management/server/store/store.go:393 uses CreateIndexIfNotExists, which explicitly constructs database-specific SQL:

  • MySQL: applies key(64) prefix length
  • PostgreSQL/SQLite: omits the length constraint and uses just key

This ensures correct behavior across all supported database engines. The production code does not have the issue described.

management/server/testdata/store.sql (2)

21-21: LGTM - Unique index enforces peer key uniqueness.

The conversion from a non-unique to a unique index correctly implements the constraint to prevent duplicate peer keys.


57-57: LGTM - Test data updated with valid peer key.

The peer record now includes a non-empty base64-encoded key value, which aligns with the new unique constraint.

management/server/store/store.go (2)

353-356: LGTM - Duplicate removal in pre-auto migration is correctly placed.

Removing duplicate peer keys before auto-migration ensures the unique constraint can be applied without conflicts. The placement in pre-auto migrations is appropriate since this cleanup must occur before GORM processes the uniqueIndex struct tag.


389-394: LGTM - Post-auto index management is properly sequenced.

The two-step process correctly handles the index transition:

  1. Drop the old non-unique idx_peers_key index
  2. Create the new unique idx_peers_key_unique index

This approach is safe because CreateIndexIfNotExists will skip creation if GORM's auto-migration already created the index from the struct tag.

management/server/testdata/storev1.sql (1)

17-17: LGTM - Consistent unique index enforcement across test data.

The index change matches the pattern in other test data files, ensuring consistency across the test suite.

management/server/migration/migration.go (3)

407-407: LGTM - MySQL length specification for key column is necessary.

Adding key to the list of columns requiring length specification in MySQL is correct. MySQL requires a length prefix for TEXT columns in indexes, and 64 characters is appropriate for the base64-encoded peer keys.
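The reviewer's point above is that MySQL refuses to index a TEXT column without a prefix length, while PostgreSQL and SQLite take the bare column name. A minimal sketch of building that dialect-aware DDL follows; the function name and signature are hypothetical, not the actual CreateIndexIfNotExists from the PR.

```go
package main

import "fmt"

// buildUniqueIndexSQL sketches dialect-aware index DDL: for MySQL the
// column is backtick-quoted and given a 64-character prefix length
// (required for TEXT columns); other dialects use the bare column name.
func buildUniqueIndexSQL(dialect, table, index, column string) string {
	col := column
	if dialect == "mysql" {
		col = fmt.Sprintf("`%s`(64)", column) // quote + prefix length
	}
	return fmt.Sprintf("CREATE UNIQUE INDEX %s ON %s (%s)", index, table, col)
}

func main() {
	fmt.Println(buildUniqueIndexSQL("mysql", "peers", "idx_peers_key_unique", "key"))
	// CREATE UNIQUE INDEX idx_peers_key_unique ON peers (`key`(64))
	fmt.Println(buildUniqueIndexSQL("sqlite", "peers", "idx_peers_key_unique", "key"))
	// CREATE UNIQUE INDEX idx_peers_key_unique ON peers (key)
}
```

Quoting also matters independently of length: key is a reserved word in MySQL, so the backticks are needed even where no prefix length applies.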


523-523: Good strategy: keeping the most recently seen peer.

Ordering by peer_status_last_seen DESC and keeping the first peer is a sensible heuristic for identifying the active/current peer among duplicates. This minimizes disruption to active connections.


528-530: Good defensive check for edge case.

The check for len(peerIDs) <= 1 handles the unlikely scenario where peers might be deleted between queries or if the duplicate detection query had a false positive. This prevents unnecessary processing.


Copilot AI left a comment


Pull request overview

This PR fixes an issue where duplicate peers could exist in the database with the same WireGuard public key. The fix adds a unique constraint on the peer key field to prevent future duplicates and includes a migration to clean up existing duplicates.

Key Changes:

  • Adds a unique index constraint on the peer key field to enforce uniqueness at the database level
  • Implements a data migration (RemoveDuplicatePeerKeys) that removes duplicate peers, keeping only the most recently seen peer for each duplicate key
  • Updates test data files to ensure all peers have unique keys and reflect the new index structure

Reviewed changes

Copilot reviewed 9 out of 9 changed files in this pull request and generated 3 comments.

Summary per file:

  • management/server/peer/peer.go: Updated the Key field gorm tag to use uniqueIndex:idx_peers_key_unique instead of a regular index
  • management/server/migration/migration.go: Added the RemoveDuplicatePeerKeys function to clean up duplicates and updated CreateIndexIfNotExists to handle the "key" column for MySQL
  • management/server/store/store.go: Registered the new migration functions: RemoveDuplicatePeerKeys in pre-auto migrations, and the index drop/create in post-auto migrations
  • management/server/store/sql_store_test.go: Added Key field values to test peer objects to comply with the new unique constraint
  • management/server/testdata/storev1.sql: Changed the index from idx_peers_key to the unique index idx_peers_key_unique
  • management/server/testdata/store_with_expired_peers.sql: Changed the index to unique and fixed a duplicate peer key in test data
  • management/server/testdata/store_policy_migrate.sql: Changed the index from idx_peers_key to the unique index idx_peers_key_unique
  • management/server/testdata/store.sql: Changed the index to unique and added a proper key value to a peer that previously had an empty key
  • management/server/testdata/extended-store.sql: Changed the index from idx_peers_key to the unique index idx_peers_key_unique
Comments suppressed due to low confidence (1)

management/server/migration/migration.go:543

  • The RemoveDuplicatePeerKeys function lacks test coverage. Given that this is a critical data migration function that deletes peer records, it should have comprehensive test coverage to verify its behavior with different scenarios: no duplicates, single duplicate, multiple duplicates, and edge cases where the key field is empty or null.
func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
	if !db.Migrator().HasTable("peers") {
		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
		return nil
	}

	keyColumn := GetColumnName(db, "key")

	var duplicates []struct {
		Key   string
		Count int64
	}

	if err := db.Table("peers").
		Select(keyColumn + ", COUNT(*) as count").
		Group(keyColumn).
		Having("COUNT(*) > 1").
		Find(&duplicates).Error; err != nil {
		return fmt.Errorf("find duplicate keys: %w", err)
	}

	if len(duplicates) == 0 {
		return nil
	}

	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))

	for _, dup := range duplicates {
		var peerIDs []string
		if err := db.Table("peers").
			Select("id").
			Where(keyColumn+" = ?", dup.Key).
			Order("peer_status_last_seen DESC").
			Pluck("id", &peerIDs).Error; err != nil {
			return fmt.Errorf("get peers for key: %w", err)
		}

		if len(peerIDs) <= 1 {
			continue
		}

		idsToDelete := peerIDs[1:]

		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
			return fmt.Errorf("delete duplicate peers: %w", err)
		}
	}

	return nil
}




@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @management/server/migration/migration.go:
- Around line 528-530: The condition check `if len(peerIDs) <= 1` in the
duplication cleanup loop (the branch that follows the DB query that found
COUNT(*) > 1) can hide a race or logic bug; update the branch to log a warning
including contextual identifiers (e.g., the duplicate key/peer identifier, the
SQL query parameters or returned peerIDs, and any shard/partition IDs) whenever
it occurs so you can trace deletions that happen between queries; keep the
existing continue behavior but emit a structured processLogger.Warn or similar
with the function/method context (referencing the variables `peerIDs` and the
duplicate-detection query) to aid debugging.
- Around line 491-540: The duplicate-removal loop in RemoveDuplicatePeerKeys
runs deletes outside any transaction; wrap the lookup-and-delete work in a
single DB transaction so failures roll back. Change the logic to call
db.Transaction(func(tx *gorm.DB) error { ... }) and inside use
tx.Table(...).Select(...).Where(...).Order(...).Pluck(...) and
tx.Table(...).Where(...).Delete(...) for each duplicate key, returning the error
to abort/rollback the transaction; on success let the transaction commit and
return nil from the Transaction callback. Ensure error messages remain wrapped
(fmt.Errorf(...)) when returning from the transaction.
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)

516-537: Enhance logging for debugging and confirmation.

Consider adding:

  1. Sample duplicate keys in the warning message for debugging.
  2. A completion log message confirming successful cleanup.
📋 Proposed logging improvements
-	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))
+	sampleKeys := make([]string, 0, min(3, len(duplicates)))
+	for i := 0; i < min(3, len(duplicates)); i++ {
+		keyPreview := duplicates[i].Key
+		if len(keyPreview) > 16 {
+			keyPreview = keyPreview[:16] + "..."
+		}
+		sampleKeys = append(sampleKeys, keyPreview)
+	}
+	log.WithContext(ctx).Warnf("Found %d duplicate peer keys (samples: %v), cleaning up", len(duplicates), sampleKeys)
 
 	for _, dup := range duplicates {
 		// ... existing code ...
 	}
 
+	log.WithContext(ctx).Infof("Successfully cleaned up %d duplicate peer keys", len(duplicates))
 	return nil
 }

Note: The min function may need to be defined if not available in your Go version.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4454e9b and b0b154f.

📒 Files selected for processing (2)
  • management/server/migration/migration.go
  • management/server/store/sql_store_test.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (14)
  • GitHub Check: iOS / Build
  • GitHub Check: Android / Build
  • GitHub Check: Build Cache
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: release
  • GitHub Check: FreeBSD Port / Build & Test
  • GitHub Check: release_ui
  • GitHub Check: release_ui_darwin
  • GitHub Check: Darwin
  • GitHub Check: Linux
  • GitHub Check: Windows
  • GitHub Check: Client / Unit
  • GitHub Check: JS / Lint
🔇 Additional comments (1)
management/server/migration/migration.go (1)

407-407: LGTM! MySQL index length specification for key column.

Adding "key" to the list of columns requiring length specification (64) for MySQL indexes is correct and aligns with the migration to enforce a unique constraint on peers.key.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (2)
management/server/store/sql_store_test.go (2)

313-319: Give the SQLite delete-account peer a concrete key

Assigning Key: "peerkey" to the peer used in TestSqlite_DeleteAccount means the subsequent GetAccountByPeerPubKey("peerkey") check now relates to an actual indexed key that was present before deletion, making the test more accurate. You might optionally assert that lookup succeeds before calling DeleteAccount to tighten the test, but it’s not required.


968-989: Assign distinct keys in TestSqlite_GetTakenIPs peers

The new Key: "key1" / "key2" values ensure the additional peers comply with the unique key constraint while the test still only asserts on collected IPs. Consider keeping this pattern (unique, test-specific key strings) for any future peers added against the same fixtures.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between b0b154f and ccacb90.

📒 Files selected for processing (4)
  • management/server/account_test.go
  • management/server/group_test.go
  • management/server/peer_test.go
  • management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (25)
  • GitHub Check: Client / Unit
  • GitHub Check: Management / Benchmark (API) (amd64, postgres)
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: Management / Benchmark (API) (amd64, sqlite)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Management / Benchmark (amd64, sqlite)
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Management / Integration (amd64, postgres)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Management / Benchmark (amd64, postgres)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Client / Unit (386)
  • GitHub Check: release
  • GitHub Check: release_ui
  • GitHub Check: FreeBSD Port / Build & Test
  • GitHub Check: release_ui_darwin
  • GitHub Check: Darwin
  • GitHub Check: Windows
  • GitHub Check: Linux
  • GitHub Check: Android / Build
  • GitHub Check: Client / Unit
  • GitHub Check: JS / Lint
  • GitHub Check: Client / Unit
🔇 Additional comments (15)
management/server/group_test.go (1)

893-899: Populate Peer.Key in concurrency test to align with unique key constraint

Using Key: "key" + strconv.Itoa(i) gives each test peer a deterministic, unique key, which keeps this stress test compatible with the new unique index on peers.key. Looks good.

management/server/peer_test.go (2)

2249-2253: Set Key on peers in Test_AddPeer to avoid empty-key inserts

Assigning Key: "key" + strconv.Itoa(i) ensures each concurrently added peer has a unique, non-empty key, matching the DB’s unique constraint on peers.key and keeping the benchmark-like test focused on AddPeer performance rather than constraint violations.


2129-2143: Initialize Peer.Key in Test_DeletePeer fixture peers

Adding Key: "key1" / Key: "key2" to the in-memory peers makes the fixture consistent with the unique key constraint and with how production peers are indexed, while keeping the test’s behavior (delete by ID, group membership update) unchanged.

management/server/store/sql_store_test.go (11)

94-103: Use peerID as Key in large account save test

Setting Key: peerID guarantees uniqueness across the 6000 generated peers and keeps runLargeTest compatible with the unique index on peers.key without changing what the test measures.


232-252: Populate peer keys in Test_SaveAccount

Adding "peerkey" / "peerkey2" for the two test peers ensures the peer-key index is populated with non-empty values and keeps the various GetAccountBy* index lookups meaningful. The change is self‑contained to this empty test DB per engine.


452-460: Ensure SavePeer test peer has a non-empty key

Providing Key: "peerkey" here keeps SavePeer interactions aligned with the unique peers.key column and avoids relying on empty-string behavior. The rest of the test (NotFound on unknown peer, then successful update) remains unchanged.


509-516: Set key on peer in TestSqlStore_SavePeerStatus

Adding Key: "peerkey" to the helper peer makes the in-memory account state match the DB schema expectations without affecting the intended status-only assertions.


738-744: Make helper-created peers globally unique via peerkey prefix

Using Key: "peerkey" + str in newAccount guarantees globally unique keys for all accounts created by this helper, which is important now that peers.key has a global unique index. This is a good choice.


1011-1031: Set keys for peers in TestSqlite_GetPeerLabelsInAccount

Adding Key: "key1" / "key2" makes these peers consistent with the unique key index; the test’s focus on label collection remains intact and unaffected.


1052-1066: Decouple key uniqueness from DNS-label uniqueness tests

By giving peer1 and peer2 different keys ("key1", "key2") but the same DNS label, Test_AddPeerWithSameDnsLabel now clearly exercises the DNS-label uniqueness path rather than accidentally hitting the key constraint. This matches the test’s intent.


1079-1093: Decouple key uniqueness from IP uniqueness tests

Similarly, distinct keys for peer1 and peer2 in Test_AddPeerWithSameIP ensure the failure is driven purely by duplicate IPs, keeping the test aligned with its purpose after key became unique.


3704-3710: Give benchmark peers deterministic unique keys

Using Key: fmt.Sprintf("key-%d", i) in BenchmarkGetAccountPeers avoids unique-constraint failures while seeding 1000 peers and provides realistic data for any future key-based lookups in this benchmark.


3871-3878: Good coverage of key→user mapping in GetUserIDByPeerKey

The test peer now has Key: peerKey, and the test correctly asserts that GetUserIDByPeerKey returns the associated userID, validating the new lookup path via the unique key index.


3908-3915: Explicitly cover peers with no associated user in GetUserIDByPeerKey

Here the peer has a key but an empty UserID, and the test verifies that GetUserIDByPeerKey returns "" without error. This neatly documents the expected behavior for key-only peers.

management/server/account_test.go (1)

3468-3474: LGTM! Key fields correctly added for unique constraint.

The addition of unique Key values ("key1" and "key2") to the peer instances aligns with the PR's objective to enforce unique keys and prevent duplicates. The keys are distinct, non-empty, and consistent with the simplified test key patterns used elsewhere in this file.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
management/server/migration/migration.go (1)

491-540: Consider wrapping in a transaction and adding a completion log.

Two observations:

  1. Missing transaction: Other migration functions in this file use db.Transaction() for atomicity. While this cleanup is idempotent and can recover on re-run, wrapping the detection and deletion in a transaction would ensure consistency if a failure occurs mid-operation.

  2. Missing completion log: For consistency with other migration functions (e.g., lines 119, 221, 358), consider adding a completion log message.

♻️ Suggested improvements
 func RemoveDuplicatePeerKeys(ctx context.Context, db *gorm.DB) error {
 	if !db.Migrator().HasTable("peers") {
 		log.WithContext(ctx).Debug("peers table does not exist, skipping duplicate key cleanup")
 		return nil
 	}

 	keyColumn := GetColumnName(db, "key")

 	var duplicates []struct {
 		Key   string
 		Count int64
 	}

 	if err := db.Table("peers").
 		Select(keyColumn + ", COUNT(*) as count").
 		Group(keyColumn).
 		Having("COUNT(*) > 1").
 		Find(&duplicates).Error; err != nil {
 		return fmt.Errorf("find duplicate keys: %w", err)
 	}

 	if len(duplicates) == 0 {
 		return nil
 	}

 	log.WithContext(ctx).Warnf("Found %d duplicate peer keys, cleaning up", len(duplicates))

-	for _, dup := range duplicates {
-		var peerIDs []string
-		if err := db.Table("peers").
-			Select("id").
-			Where(keyColumn+" = ?", dup.Key).
-			Order("peer_status_last_seen DESC").
-			Pluck("id", &peerIDs).Error; err != nil {
-			return fmt.Errorf("get peers for key: %w", err)
-		}
-
-		if len(peerIDs) <= 1 {
-			continue
-		}
-
-		idsToDelete := peerIDs[1:]
-
-		if err := db.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
-			return fmt.Errorf("delete duplicate peers: %w", err)
-		}
-	}
-
-	return nil
+	if err := db.Transaction(func(tx *gorm.DB) error {
+		for _, dup := range duplicates {
+			var peerIDs []string
+			if err := tx.Table("peers").
+				Select("id").
+				Where(keyColumn+" = ?", dup.Key).
+				Order("peer_status_last_seen DESC").
+				Pluck("id", &peerIDs).Error; err != nil {
+				return fmt.Errorf("get peers for key: %w", err)
+			}
+
+			if len(peerIDs) <= 1 {
+				continue
+			}
+
+			idsToDelete := peerIDs[1:]
+
+			if err := tx.Table("peers").Where("id IN ?", idsToDelete).Delete(nil).Error; err != nil {
+				return fmt.Errorf("delete duplicate peers: %w", err)
+			}
+		}
+		return nil
+	}); err != nil {
+		return err
+	}
+
+	log.WithContext(ctx).Infof("Duplicate peer key cleanup completed")
+	return nil
 }
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ccacb90 and 876047e.

📒 Files selected for processing (3)
  • management/server/account_test.go
  • management/server/migration/migration.go
  • management/server/store/sql_store_test.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • management/server/account_test.go
  • management/server/store/sql_store_test.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (13)
  • GitHub Check: Build Cache
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: iOS / Build
  • GitHub Check: Android / Build
  • GitHub Check: release
  • GitHub Check: Linux
  • GitHub Check: release_ui
  • GitHub Check: release_ui_darwin
  • GitHub Check: Windows
  • GitHub Check: Darwin
  • GitHub Check: Client / Unit
  • GitHub Check: JS / Lint
🔇 Additional comments (1)
management/server/migration/migration.go (1)

407-411: LGTM!

Adding key to the columns requiring a length specification for MySQL indexes is appropriate and consistent with the handling for ip and dns_label.

@pascal-fischer
Collaborator

should we remove the gorm tag from the peers struct and just leave a comment?
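One way that suggestion could look, sketched in plain Go (the struct shape and comment wording are assumptions for illustration, not the repository's actual model):

```go
package main

import "fmt"

// Peer sketches the suggestion: drop the gorm index tag from Key and document
// the constraint in a comment instead, since the unique index on peers.key is
// now created by migration code rather than by AutoMigrate.
type Peer struct {
	ID string `gorm:"primaryKey"`
	// Key must stay unique. The unique index on peers.key is created and
	// maintained by the migration (duplicate cleanup plus explicit index
	// creation), not by a gorm tag on this field.
	Key string
}

func main() {
	p := Peer{ID: "p1", Key: "k1"}
	fmt.Println(p.ID, p.Key) // prints: p1 k1
}
```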

@coderabbitai coderabbitai bot left a comment
Actionable comments posted: 0

🧹 Nitpick comments (2)
management/server/migration/migration_test.go (2)

344-349: Consider removing the unused PeerStatusConnected field.

The PeerStatusConnected field is defined but not used in any of the tests or in the RemoveDuplicatePeerKeys function (which only orders by peer_status_last_seen). Removing it would make the test struct more minimal and focused.

♻️ Proposed cleanup
 type testPeer struct {
 	ID                  string `gorm:"primaryKey"`
 	Key                 string `gorm:"index"`
 	PeerStatusLastSeen  time.Time
-	PeerStatusConnected bool
 }

387-428: Excellent test coverage for the duplicate removal logic!

The test thoroughly verifies the core functionality:

  • Correctly sets up multiple duplicates with varying timestamps
  • Validates that the most recent peer for each key is retained
  • Checks both the final count and the specific peer IDs that should remain or be deleted

The assertions are comprehensive and the test data is well-structured.

Optional enhancement: Consider adding an edge case test for peers with the same key and identical LastSeen timestamps. The deletion order might be non-deterministic in such cases, though this scenario is unlikely in production.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 876047e and 4c002a9.

📒 Files selected for processing (1)
  • management/server/migration/migration_test.go
🧰 Additional context used
🧬 Code graph analysis (1)
management/server/migration/migration_test.go (1)
management/server/migration/migration.go (1)
  • RemoveDuplicatePeerKeys (491-540)
🔇 Additional comments (4)
management/server/migration/migration_test.go (4)

355-362: LGTM! Clean test setup helper.

The helper follows Go testing best practices by marking itself with t.Helper(), ensuring a fresh table state by dropping and recreating, and handling errors appropriately.


364-385: LGTM! Good baseline test for the no-duplicates case.

This test correctly verifies that when all peer keys are unique, the cleanup function leaves all peers intact. The test is clear and well-structured.


430-435: LGTM! Good edge case coverage.

This test ensures the cleanup function handles an empty table gracefully, which is important for robustness.


437-443: LGTM! Essential edge case for migration safety.

This test verifies that the cleanup function safely handles the case where the peers table doesn't exist, which is crucial for migration robustness. The test correctly aligns with the HasTable check in the RemoveDuplicatePeerKeys implementation.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🤖 Fix all issues with AI agents
In @management/server/migration/migration.go:
- Around lines 492-541: The cleanup finds and deletes duplicate rows without a transaction, so an error mid-loop leaves the cleanup only partially applied. In RemoveDuplicatePeerKeys, wrap the duplicate-finding query and every per-duplicate query/delete in a single transaction (db.Transaction, or tx := db.Begin() with tx.Commit()/tx.Rollback()) and run the subsequent calls (Select/Group/Having/Find, Select/Where/Order/Pluck, and Delete) on the transactional *gorm.DB, so that any error rolls the whole cleanup back and successful completion commits it.
- Line 524: Order("peer_status_last_seen DESC") leaves the placement of NULL values database-dependent, so deduplication is non-deterministic when some rows have a NULL timestamp. Make NULLs sort last explicitly: either use a GORM-aware ordering that specifies NULLS LAST on peer_status_last_seen, or order by a COALESCE fallback (e.g. COALESCE(peer_status_last_seen, …) DESC) so NULL values are treated as oldest and always sorted after real timestamps.
🧹 Nitpick comments (1)
management/server/migration/migration.go (1)

513-515: Consider logging when no duplicates are found.

The function returns early without logging when no duplicates are found, which could make debugging or auditing migration runs more difficult.

📝 Suggested logging addition
 	if len(duplicates) == 0 {
+		log.WithContext(ctx).Debug("No duplicate peer keys found")
 		return nil
 	}
📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 1bf4585 and 57719ae.

📒 Files selected for processing (1)
  • management/server/migration/migration.go
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
  • GitHub Check: Management / Benchmark (API) (amd64, postgres)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Signal / Unit (amd64)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Management / Integration (amd64, sqlite)
  • GitHub Check: Management / Integration (amd64, postgres)
  • GitHub Check: Management / Benchmark (amd64, postgres)
  • GitHub Check: Management / Benchmark (API) (amd64, sqlite)
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Management / Benchmark (amd64, sqlite)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: Client / Unit (386)
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: release
  • GitHub Check: release_ui_darwin
  • GitHub Check: release_ui
  • GitHub Check: Android / Build
  • GitHub Check: iOS / Build
  • GitHub Check: Client / Unit
  • GitHub Check: Linux
  • GitHub Check: Windows
  • GitHub Check: JS / Lint
  • GitHub Check: Darwin
🔇 Additional comments (1)
management/server/migration/migration.go (1)

407-411: LGTM! Proper MySQL TEXT column index handling.

The changes correctly extend the index creation logic to include the key column with a length specifier, which is required for indexing TEXT columns in MySQL. The backtick quoting for all columns is appropriate for MySQL.
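The reason the length specifier matters can be illustrated with a small DDL builder: MySQL refuses to index an unbounded TEXT column, so the index must cover a fixed prefix. The index name and the 64-byte prefix below are illustrative assumptions, not the values used by the migration:

```go
package main

import "fmt"

// indexDDL builds a CREATE UNIQUE INDEX statement. On MySQL, TEXT-typed
// columns need a key-length prefix (here an assumed 64 bytes); other
// dialects can index the column directly.
func indexDDL(dialect, table, column string) string {
	if dialect == "mysql" {
		return fmt.Sprintf("CREATE UNIQUE INDEX idx_%s_%s ON %s (`%s`(64))", table, column, table, column)
	}
	return fmt.Sprintf("CREATE UNIQUE INDEX idx_%s_%s ON %s (%s)", table, column, table, column)
}

func main() {
	fmt.Println(indexDDL("mysql", "peers", "key"))
	// prints: CREATE UNIQUE INDEX idx_peers_key ON peers (`key`(64))
}
```

Backtick-quoting the column also matters on MySQL because key is a reserved word there.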

@sonarqubecloud
sonarqubecloud bot commented Jan 8, 2026

@crn4 crn4 merged commit 684fc0d into main Jan 9, 2026
41 checks passed
@crn4 crn4 deleted the fix/key-constraint branch January 9, 2026 10:49