Added in CL 700496, freeSomeSpanSPMCs attempts to bound tail latency by
processing at most 64 entries at a time and by returning early if it
notices a preemption request. Both mechanisms exist because the function
cannot be preempted while it holds the lock. This scheme is based on a
similar one in freeSomeWbufs.
freeSomeWbufs has a key difference: all workbufs in its list are
unconditionally freed. So freeSomeWbufs will always make forward
progress in each call (unless it is constantly preempted).
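
For illustration only, here is a minimal, self-contained sketch of that
freeSomeWbufs-style bounded batch. The package, node, workList, and
freeSomeEntries names are invented for this sketch; it is not the
runtime code.

    package sketch

    import "sync"

    // node and workList stand in for the runtime's intrusive lists.
    type node struct {
        next *node
        dead bool
    }

    var (
        listLock sync.Mutex
        workList *node
    )

    // freeSomeEntries roughly mirrors the freeSomeWbufs pattern: drop up
    // to 64 entries unconditionally, then report whether any entries
    // remain so the caller can yield to preemption and call again.
    func freeSomeEntries() bool {
        const batchSize = 64
        listLock.Lock()
        defer listLock.Unlock()
        for i := 0; i < batchSize && workList != nil; i++ {
            workList = workList.next // every scanned entry is freed
        }
        return workList != nil
    }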
In contrast, freeSomeSpanSPMCs only frees "dead" entries. If the list
contains more than 64 live entries, a call may make no progress, and the
caller will simply keep calling in a loop forever until the GC ends, at
which point the function returns early, reporting success. The infinite
loop then likely restarts at the next GC cycle.
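
Continuing the illustrative sketch above, a dead-only variant shows the
failure mode: when a full batch of 64 entries is live, nothing is
unlinked, yet the function still reports that work remains, so a caller
loop such as "for freeSomeDeadEntries() { /* yield */ }" can spin
without making progress.

    // freeSomeDeadEntries is a hedged approximation of the current
    // freeSomeSpanSPMCs shape: scan at most 64 entries, but only unlink
    // the dead ones.
    func freeSomeDeadEntries() bool {
        const batchSize = 64
        listLock.Lock()
        defer listLock.Unlock()
        p := &workList
        for i := 0; i < batchSize && *p != nil; i++ {
            if (*p).dead {
                *p = (*p).next // unlink a dead entry: progress
            } else {
                p = &(*p).next // skip a live entry: no progress
            }
        }
        return workList != nil
    }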
The queues are used on each P, so it is easy to have 64 permanently live
queues if GOMAXPROCS >= 64. If GOMAXPROCS < 64, it is possible to
transiently have more queues, but spanQueue.drain increases queue size
in an attempt to reach a steady state of one queue per P.
We must drop work.spanSPMCs.lock to allow preemption, but dropping the
lock allows the linked list to be mutated, meaning we cannot simply
continue iteration after retaking the lock. Since there is no
straightforward resolution to this, and we expect the list to generally
contain only around one entry per P, simply remove the batching and
process the entire list without preemption. We may want to revisit this
in the future for very high GOMAXPROCS, or if applications otherwise
regularly create very long lists.
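
Under the same illustrative types as the sketches above, the change
roughly corresponds to dropping both the batch limit and the early
return, and walking the whole list in one critical section:

    // freeAllDeadEntries sketches the new shape: no batch limit and no
    // preemption check, so every dead entry is removed before the lock
    // is released and the function always completes in one call.
    func freeAllDeadEntries() {
        listLock.Lock()
        defer listLock.Unlock()
        for p := &workList; *p != nil; {
            if (*p).dead {
                *p = (*p).next
            } else {
                p = &(*p).next
            }
        }
    }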
Fixes golang#75771.
Change-Id: I6a6a636cd3be443aacde5a678c460aa7066b4c4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/709575
Reviewed-by: Michael Knyszek <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>