22 changes: 21 additions & 1 deletion generator.go
@@ -434,7 +434,27 @@ func (g *Gen) getClockSequence(useUnixTSMs bool, atTime time.Time) (uint64, uint
// Clock didn't change since last UUID generation.
// Should increase clock sequence.
if timeNow <= g.lastTime {
g.clockSequence++
// Increment the 14-bit clock sequence (RFC-9562 §6.1).
// Only the lower 14 bits are encoded in the UUID; the upper two
// bits are overridden by the Variant in SetVariant().
g.clockSequence = (g.clockSequence + 1) & 0x3fff

// If the sequence wrapped (back to zero) we MUST wait for the
// timestamp to advance to preserve uniqueness (see RFC-9562 §6.1).
if g.clockSequence == 0 {
for {
Member

I think this loop could be tidied up a bit, but the logic looks correct.

Contributor Author

What change do you propose? @dylan-bourque

Member

I was thinking of something like this

for ; timeNow <= g.lastTime; timeNow = now() {
    time.Sleep(time.Microsecond) // or runtime.Gosched(), see my comment below
}

where now() is a locally defined helper that wraps the if/else timestamp logic

now := func() uint64 {
    epoch := g.epochFunc()
    if useUnixTSMs {
        return uint64(epoch.UnixMilli())
    }
    return g.getEpoch(epoch)
}

I'm not 100% convinced I like that better, though.

if useUnixTSMs {
timeNow = uint64(g.epochFunc().UnixMilli())
} else {
timeNow = g.getEpoch(g.epochFunc())
}
if timeNow > g.lastTime {
break
}
// Sleep briefly to avoid busy-waiting and reduce CPU usage.
time.Sleep(time.Microsecond)
Collaborator

Worst case, this has a repeating wait of 10 microseconds.

Contributor Author

Yep, that makes sense!

Member

I found some references that say the minimum OS timer resolution on Linux can be 50+ microseconds and as much as 15ms on Windows. That could make this loop VERY slow. 😬

This fixes the clock sequence overflow issue, but what does it do to the benchmarks for V1, V6, and V7 values? Since we only want to wait a tiny amount of time, this seems like a good place for runtime.Gosched() (so we only yield the current time slice back to the scheduler).

Contributor Author

@dylan-bourque I'm not sure this solution will give us much of a gain. This loop with time.Sleep(time.Microsecond) is called only when the clockSequence wraps (once every 16,384 UUIDs with the same timestamp), not on every NewV1/NewV6/NewV7.

I compared Sleep vs. runtime.Gosched() locally on the BenchmarkGenerator/NewV1|NewV6|NewV7 benchmarks (3 runs each, Apple M3 Pro).

Here are the benchmarks:

NewV1: Sleep ~44-45 ns/op, Gosched ~45 ns/op
NewV6: Sleep ~119 ns/op, Gosched ~119-120 ns/op
NewV7: Sleep ~145 ns/op, Gosched ~145 ns/op

}
}
}
g.lastTime = timeNow

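The 14-bit wrap in the hunk above can be checked in a couple of lines: masking with 0x3fff keeps the sequence in [0, 16383], and incrementing past 0x3fff lands back on zero, which is exactly the condition that triggers the wait-for-timestamp loop:

```go
package main

import "fmt"

func main() {
	// Incrementing the maximum 14-bit value (16383) wraps to zero.
	seq := uint64(0x3fff)
	seq = (seq + 1) & 0x3fff
	fmt.Println(seq) // prints: 0

	// Any non-maximum value increments normally under the mask.
	fmt.Println((uint64(41) + 1) & 0x3fff) // prints: 42
}
```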
70 changes: 70 additions & 0 deletions race_v1_test.go
@@ -0,0 +1,70 @@
package uuid

import (
"sync"
"sync/atomic"
"testing"
)

// TestV1UniqueConcurrent verifies that Version-1 UUID generation remains
// collision-free under various levels of concurrent load. The test uses
// table-driven subtests to progressively increase the number of goroutines
// and UUIDs generated. We intentionally let the timestamp advance (default
// NewGen) to keep the test quick while still exercising the new
// clock-sequence logic under contention.
func TestV1UniqueConcurrent(t *testing.T) {
cases := []struct {
name string
goroutines int
uuidsPerGor int
}{
{"small", 20, 600}, // 12,000 UUIDs (baseline)
{"medium", 100, 1000}, // 100,000 UUIDs (original failure case)
{"large", 200, 1000}, // 200,000 UUIDs (high contention)
}

for _, tc := range cases {
tc := tc // capture range variable
t.Run(tc.name, func(t *testing.T) {
gen := NewGen()

var (
wg sync.WaitGroup
mu sync.Mutex
seen = make(map[UUID]struct{}, tc.goroutines*tc.uuidsPerGor)
dupCount uint32
genErr uint32
)

for i := 0; i < tc.goroutines; i++ {
wg.Add(1)
go func() {
defer wg.Done()
for j := 0; j < tc.uuidsPerGor; j++ {
u, err := gen.NewV1()
if err != nil {
atomic.AddUint32(&genErr, 1)
return
}
mu.Lock()
if _, exists := seen[u]; exists {
dupCount++
} else {
seen[u] = struct{}{}
}
mu.Unlock()
}
}()
}

wg.Wait()

if genErr > 0 {
t.Fatalf("%d errors occurred during UUID generation", genErr)
}
if dupCount > 0 {
t.Fatalf("duplicate UUIDs detected: %d", dupCount)
}
})
}
}