Conversation

@mrproliu
Contributor

  • If this pull request closes/resolves/fixes an existing issue, replace the issue number. Fixes apache/skywalking#.
  • Update the CHANGES log.

@mrproliu mrproliu added this to the 0.10.0 milestone Jan 22, 2026
@mrproliu mrproliu requested review from Copilot and hanahmily January 22, 2026 03:30
@mrproliu mrproliu added the enhancement New feature or request label Jan 22, 2026
Contributor

Copilot AI left a comment


Pull request overview

Updates the property model’s on-disk layout to be group-aware, so shards are stored under per-group directories, and adjusts repair/snapshot/gossip logic and tests accordingly.

Changes:

  • Store property shards under <property-dir>/<group>/shard-<id>/... and load/query/delete across group-scoped shard sets.
  • Update repair tree building, snapshotting, and repair gossip flows to operate with group-scoped shards/snapshots.
  • Refresh generated API reference docs and add a breaking-change note to CHANGES.md.
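The group-aware layout described above can be illustrated with a small path helper; `shardPath` and its arguments are hypothetical names for this sketch, not the actual BanyanDB identifiers:

```go
package main

import (
	"fmt"
	"path/filepath"
)

// shardPath sketches the new on-disk layout: shards now live under a
// per-group directory, <property-dir>/<group>/shard-<id>/..., instead of
// directly under <property-dir>/shard-<id>/.
func shardPath(propertyDir, group string, shardID uint32) string {
	return filepath.Join(propertyDir, group, fmt.Sprintf("shard-%d", shardID))
}

func main() {
	fmt.Println(shardPath("/data/property", "default", 0))
	// → /data/property/default/shard-0
}
```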

Reviewed changes

Copilot reviewed 11 out of 11 changed files in this pull request and generated 4 comments.

Summary per file:
  • docs/api-reference.md: Updates generated API docs for TopNList.Item fields.
  • banyand/property/test_helper.go: Updates the test helper to use the new loadShard(group, id) signature.
  • banyand/property/shard_test.go: Adjusts shard tests for group-aware shard loading.
  • banyand/property/shard.go: Writes shards under a group directory and tags metrics with the group.
  • banyand/property/repair_test.go: Updates repair tests for group-aware shard loading and the tree reader signature.
  • banyand/property/repair_gossip_test.go: Updates gossip tests to snapshot/load shards per group.
  • banyand/property/repair_gossip.go: Loads shards by group and uses the updated tree reader API.
  • banyand/property/repair.go: Adjusts repair tree/state paths and rebuild logic for the new layout.
  • banyand/property/listener.go: Takes snapshots into group-scoped directories.
  • banyand/property/db.go: Introduces a group→shards mapping and routes operations by group.
  • CHANGES.md: Documents the breaking storage layout change.


Comment on lines 316 to 317
(*groupsMap)[group] = gs
db.groups.Store(groupsMap)
Copilot AI Jan 22, 2026


getOrCreateGroupShards mutates the existing map[string]*groupShards in-place ((*groupsMap)[group] = gs) and then re-stores the same map pointer. Callers like delete/query/collect/close iterate over *groupsMap without holding db.mu, so this can trigger concurrent map iteration and map write panics. Use copy-on-write (allocate a new map, copy entries, add the new group, then Store the new pointer) or protect all reads with the same lock used for writes.

Suggested change:
- (*groupsMap)[group] = gs
- db.groups.Store(groupsMap)
+ // Use copy-on-write: create a new map, copy existing entries, then add the new group.
+ oldMap := *groupsMap
+ newMap := make(map[string]*groupShards, len(oldMap)+1)
+ for k, v := range oldMap {
+ 	newMap[k] = v
+ }
+ newMap[group] = gs
+ db.groups.Store(&newMap)

Contributor

Use sync.Map to fix it.
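The suggested `sync.Map` approach could look like the sketch below; readers never race with writers, and `LoadOrStore` makes get-or-create atomic without a separate mutex. The `groupShards` type here is a placeholder, not the real `banyand/property/db.go` struct:

```go
package main

import (
	"fmt"
	"sync"
)

// groupShards stands in for the real per-group shard set.
type groupShards struct {
	group string
}

// db holds the group→shards mapping in a sync.Map, so concurrent
// iteration by delete/query/collect/close cannot race with a writer
// adding a new group.
type db struct {
	groups sync.Map // map[string]*groupShards
}

func (d *db) getOrCreateGroupShards(group string) *groupShards {
	// Fast path: the group already exists.
	if v, ok := d.groups.Load(group); ok {
		return v.(*groupShards)
	}
	// Slow path: LoadOrStore ensures exactly one instance wins
	// even under concurrent creation.
	v, _ := d.groups.LoadOrStore(group, &groupShards{group: group})
	return v.(*groupShards)
}

func main() {
	d := &db{}
	a := d.getOrCreateGroupShards("g1")
	b := d.getOrCreateGroupShards("g1")
	fmt.Println(a == b) // repeated lookups return the same instance
}
```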

Comment on lines 289 to 295
gs.mu.Lock()
sLst := gs.shards.Load()
if sLst == nil {
	sLst = &[]*shard{}
}
- *sLst = append(*sLst, sd)
- db.sLst.Store(sLst)
+ newList := append(*sLst, sd)
+ gs.shards.Store(&newList)
Copilot AI Jan 22, 2026


newList := append(*sLst, sd) can reuse the backing array of *sLst and mutate it in-place. Since readers access gs.shards.Load() without taking gs.mu, this can cause data races / inconsistent reads. Build the new slice with a guaranteed new backing array (e.g., copy to a fresh slice before appending) before storing it.
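A sketch of the copy-before-append fix the comment describes: allocating a fresh backing array before appending guarantees that readers holding the old slice never observe an in-place mutation. The `shard` and `groupShards` types here are illustrative stand-ins, not the real definitions:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// shard is a placeholder for the real property shard type.
type shard struct{ id int }

type groupShards struct {
	mu     sync.Mutex              // serializes writers
	shards atomic.Pointer[[]*shard] // lock-free reads
}

// addShard builds the new slice with a guaranteed-new backing array, so
// append can never reuse (and mutate) the array a concurrent reader holds.
func (gs *groupShards) addShard(sd *shard) {
	gs.mu.Lock()
	defer gs.mu.Unlock()
	var old []*shard
	if p := gs.shards.Load(); p != nil {
		old = *p
	}
	newList := make([]*shard, 0, len(old)+1)
	newList = append(newList, old...)
	newList = append(newList, sd)
	gs.shards.Store(&newList)
}

func main() {
	gs := &groupShards{}
	gs.addShard(&shard{id: 0})
	gs.addShard(&shard{id: 1})
	fmt.Println(len(*gs.shards.Load())) // → 2
}
```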

Contributor

fix it.


@codecov-commenter

codecov-commenter commented Jan 22, 2026

Codecov Report

❌ Patch coverage is 50.00000% with 103 lines in your changes missing coverage. Please review.
✅ Project coverage is 47.19%. Comparing base (3530dd9) to head (9ddd2fc).
⚠️ Report is 120 commits behind head on main.

Files with missing lines | Patch % | Lines
banyand/property/db.go | 53.40% | 35 Missing and 6 partials ⚠️
banyand/property/listener.go | 0.00% | 30 Missing ⚠️
banyand/property/repair.go | 64.78% | 15 Missing and 10 partials ⚠️
banyand/property/repair_gossip.go | 50.00% | 3 Missing and 3 partials ⚠️
banyand/property/test_helper.go | 0.00% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main     #955      +/-   ##
==========================================
+ Coverage   45.97%   47.19%   +1.21%     
==========================================
  Files         328      383      +55     
  Lines       55505    59611    +4106     
==========================================
+ Hits        25520    28131    +2611     
- Misses      27909    28849     +940     
- Partials     2076     2631     +555     
Flag | Coverage Δ
banyand | 49.65% <50.00%> (?)
bydbctl | 81.91% <ø> (?)
fodc | 78.59% <ø> (?)
integration-distributed | 80.00% <ø> (?)
pkg | 29.32% <ø> (?)

Flags with carried forward coverage won't be shown.

Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 12 out of 12 changed files in this pull request and generated 2 comments.

Comments suppressed due to low confidence (1)

banyand/property/listener.go:272

  • This always returns a Snapshot{Name: sn,...} even if no shards were actually snapshotted (e.g., db.groups is empty or all gs.shards are nil), which means no snapshot directory gets created. Previously the code returned a nil payload when there were no shards. Consider returning nil/an error Snapshot in the “nothing snapshotted” case or creating the snapshot directory structure up-front.
	return bus.NewMessage(bus.MessageID(time.Now().UnixNano()), &databasev1.Snapshot{
		Name:    sn,
		Catalog: commonv1.Catalog_CATALOG_PROPERTY,
	})
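The reviewer's suggestion could be sketched as follows; `Snapshot` and `snapshotAll` are stand-ins for the real `databasev1.Snapshot` and the listener's snapshot logic, and the shard count is simplified to an integer:

```go
package main

import "fmt"

// Snapshot is a placeholder for databasev1.Snapshot.
type Snapshot struct{ Name string }

// snapshotAll returns nil when no shard was actually snapshotted, so
// callers never receive a Snapshot that points at a directory which was
// never created.
func snapshotAll(name string, shardCount int) *Snapshot {
	if shardCount == 0 {
		return nil // nothing snapshotted: no directory exists, no payload
	}
	return &Snapshot{Name: name}
}

func main() {
	fmt.Println(snapshotAll("sn-1", 0) == nil) // → true
	fmt.Println(snapshotAll("sn-1", 3).Name)   // → sn-1
}
```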


@hanahmily hanahmily merged commit 5e585bc into apache:main Jan 22, 2026
22 checks passed
