fix: clock sequence race and add stress test; #219
Changes to `generator.go` (the hunk header is truncated in the original page):

```diff
@@ -434,7 +434,27 @@ func (g *Gen) getClockSequence(useUnixTSMs bool, atTime time.Time) (uint64, uint
 	// Clock didn't change since last UUID generation.
 	// Should increase clock sequence.
 	if timeNow <= g.lastTime {
-		g.clockSequence++
+		// Increment the 14-bit clock sequence (RFC-9562 §6.1).
+		// Only the lower 14 bits are encoded in the UUID; the upper two
+		// bits are overridden by the Variant in SetVariant().
+		g.clockSequence = (g.clockSequence + 1) & 0x3fff
+
+		// If the sequence wrapped (back to zero) we MUST wait for the
+		// timestamp to advance to preserve uniqueness (see RFC-9562 §6.1).
+		if g.clockSequence == 0 {
+			for {
+				if useUnixTSMs {
+					timeNow = uint64(g.epochFunc().UnixMilli())
+				} else {
+					timeNow = g.getEpoch(g.epochFunc())
+				}
+				if timeNow > g.lastTime {
+					break
+				}
+				// Sleep briefly to avoid busy-waiting and reduce CPU usage.
+				time.Sleep(time.Microsecond)
+			}
+		}
 	}
 	g.lastTime = timeNow
```

Review comments on the `time.Sleep(time.Microsecond)` line:

Collaborator: Worst case this has a repeating wait of 10 microseconds.

Contributor (Author): Yep, that makes sense!

Member: I found some references that say that the minimum OS timer resolution on Linux can be 50+ microseconds and as much as 15 ms on Windows. That could make this loop VERY slow. 😬 This fixes the clock sequence overflow issue, but what does it do to the benchmarks for V1, V6, and V7 values? Since we want to wait a tiny amount of time, this seems like a good place for …

Contributor (Author): @dylan-bourque I'm not sure this solution will give us much of a gain. This loop with … I compared … Here are the benchmarks: …
New test file (the filename is not shown in the page), adding 70 lines:

```go
package uuid

import (
	"sync"
	"sync/atomic"
	"testing"
)

// TestV1UniqueConcurrent verifies that Version-1 UUID generation remains
// collision-free under various levels of concurrent load. The test uses
// table-driven subtests to progressively increase the number of goroutines
// and UUIDs generated. We intentionally let the timestamp advance (default
// NewGen) to keep the test quick while still exercising the new
// clock-sequence logic under contention.
func TestV1UniqueConcurrent(t *testing.T) {
	cases := []struct {
		name        string
		goroutines  int
		uuidsPerGor int
	}{
		{"small", 20, 600},    // 12 000 UUIDs (baseline)
		{"medium", 100, 1000}, // 100 000 UUIDs (original failure case)
		{"large", 200, 1000},  // 200 000 UUIDs (high contention)
	}

	for _, tc := range cases {
		tc := tc // capture range variable
		t.Run(tc.name, func(t *testing.T) {
			gen := NewGen()

			var (
				wg       sync.WaitGroup
				mu       sync.Mutex
				seen     = make(map[UUID]struct{}, tc.goroutines*tc.uuidsPerGor)
				dupCount uint32
				genErr   uint32
			)

			for i := 0; i < tc.goroutines; i++ {
				wg.Add(1)
				go func() {
					defer wg.Done()
					for j := 0; j < tc.uuidsPerGor; j++ {
						u, err := gen.NewV1()
						if err != nil {
							atomic.AddUint32(&genErr, 1)
							return
						}
						mu.Lock()
						if _, exists := seen[u]; exists {
							dupCount++
						} else {
							seen[u] = struct{}{}
						}
						mu.Unlock()
					}
				}()
			}

			wg.Wait()

			if genErr > 0 {
				t.Fatalf("%d errors occurred during UUID generation", genErr)
			}
			if dupCount > 0 {
				t.Fatalf("duplicate UUIDs detected: %d", dupCount)
			}
		})
	}
}
```
Reviewer: I think this loop could be tidied up a bit, but the logic looks correct.

Author: What change do you propose? @dylan-bourque

Reviewer: I was thinking of something like this … where `now()` is a locally defined helper that wraps the if/else timestamp logic. I'm not 100% convinced I like that better, though.