
Commit f331212

runtime: remove batching from spanSPMC free
Added in CL 700496, freeSomeSpanSPMCs attempts to bound tail latency by processing at most 64 entries at a time, as well as returning early if it notices a preemption request. Both of those are attempts to reduce tail latency, as we cannot preempt the function while it holds the lock.

This scheme is based on a similar scheme in freeSomeWbufs. freeSomeWbufs has a key difference: all workbufs in its list are unconditionally freed, so freeSomeWbufs always makes forward progress in each call (unless it is constantly preempted).

In contrast, freeSomeSpanSPMCs only frees "dead" entries. If the list contains more than 64 live entries, a call may make no progress, and the caller will simply keep calling in a loop forever, until the GC ends, at which point it returns success early. The infinite loop likely restarts at the next GC cycle.

The queues are used on each P, so it is easy to have 64 permanently live queues if GOMAXPROCS >= 64. If GOMAXPROCS < 64, it is possible to transiently have more queues, but spanQueue.drain increases queue size in an attempt to reach a steady state of one queue per P.

We must drop work.spanSPMCs.lock to allow preemption, but dropping the lock allows mutation of the linked list, meaning we cannot simply continue iteration after retaking the lock. Since there is no straightforward resolution to this, and we expect there to generally be only around one entry per P, simply remove the batching and process the entire list without preemption. We may want to revisit this in the future for very high GOMAXPROCS, or if applications otherwise regularly create very long lists.

Fixes golang#75771.

Change-Id: I6a6a636cd3be443aacde5a678c460aa7066b4c4a
Reviewed-on: https://go-review.googlesource.com/c/go/+/709575
Reviewed-by: Michael Knyszek <[email protected]>
LUCI-TryBot-Result: Go LUCI <[email protected]>
1 parent 2441645 commit f331212
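To make the failure mode described in the commit message concrete, the toy program below models the old protocol: a list of per-P queues in which only "dead" entries may be freed, a batch limit of 64, and a boolean "call me again" result. Everything here (queue, freeSomeBatched, the slice-based list) is invented for illustration and deliberately simplified; it is not the runtime's code. With 64 or more permanently live entries, each batched call frees nothing yet still reports more work, so the old caller loop would never terminate.

package main

import "fmt"

// queue stands in for one spanSPMC: only queues marked dead may be freed.
// All names here are invented for this illustration.
type queue struct{ dead bool }

// freeSomeBatched mimics the old protocol: look at no more than batch
// entries, drop the dead ones, and report whether the caller should call
// again. Live entries are kept but still count against the batch, so a
// prefix of at least batch live entries means nothing is freed and the
// result is still true.
func freeSomeBatched(list []*queue, batch int) ([]*queue, bool) {
	kept := make([]*queue, 0, len(list))
	for i, q := range list {
		if i >= batch {
			// Batch exhausted: keep the rest untouched and ask to be
			// called again, even if nothing was actually freed.
			return append(kept, list[i:]...), true
		}
		if !q.dead {
			kept = append(kept, q) // live: keep it
		}
		// dead: drop it (the runtime would free its ring here)
	}
	return kept, false
}

func main() {
	// 70 permanently live queues, roughly one per P with GOMAXPROCS >= 64.
	list := make([]*queue, 70)
	for i := range list {
		list[i] = &queue{}
	}

	// The old caller pattern was essentially: for freeSomeBatched(...) { }.
	// With this input that loop never terminates, so cap it for the demo.
	more := true
	calls := 0
	for ; more && calls < 5; calls++ {
		list, more = freeSomeBatched(list, 64)
	}
	fmt.Printf("after %d calls: %d entries left, more=%v\n", calls, len(list), more)

	// The new approach is a single unconditional pass over the whole list
	// (no batch limit, no retry protocol), simulated here by passing a
	// batch as large as the list. It always terminates in one call.
	list, _ = freeSomeBatched(list, len(list))
	fmt.Printf("after one full pass: %d entries left\n", len(list))
}

The fix drops the retry protocol entirely, as the diffs below show: freeDeadSpanSPMCs walks the whole list once under the lock and returns nothing, since a "more work" result from a dead-only free can just as easily mean "no progress was possible".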

File tree: 4 files changed, +14 −21 lines

src/runtime/mgc.go
src/runtime/mgcmark_greenteagc.go
src/runtime/mgcmark_nogreenteagc.go
src/runtime/mgcsweep.go


src/runtime/mgc.go

Lines changed: 1 addition & 2 deletions
@@ -2046,8 +2046,7 @@ func gcSweep(mode gcMode) bool {
 	prepareFreeWorkbufs()
 	for freeSomeWbufs(false) {
 	}
-	for freeSomeSpanSPMCs(false) {
-	}
+	freeDeadSpanSPMCs()
 	// All "free" events for this mark/sweep cycle have
 	// now happened, so we can make this profile cycle
 	// available immediately.

src/runtime/mgcmark_greenteagc.go

Lines changed: 10 additions & 13 deletions
@@ -722,13 +722,8 @@ func (r *spanSPMC) slot(i uint32) *objptr {
 	return (*objptr)(unsafe.Add(unsafe.Pointer(r.ring), idx*unsafe.Sizeof(objptr(0))))
 }
 
-// freeSomeSpanSPMCs frees some spanSPMCs back to the OS and returns
-// true if it should be called again to free more.
-func freeSomeSpanSPMCs(preemptible bool) bool {
-	// TODO(mknyszek): This is arbitrary, but some kind of limit is necessary
-	// to help bound delays to cooperatively preempt ourselves.
-	const batchSize = 64
-
+// freeDeadSpanSPMCs frees dead spanSPMCs back to the OS.
+func freeDeadSpanSPMCs() {
 	// According to the SPMC memory management invariants, we can only free
 	// spanSPMCs outside of the mark phase. We ensure we do this in two ways.
 	//
@@ -740,18 +735,21 @@ func freeSomeSpanSPMCs(preemptible bool) bool {
 	//
 	// This way, we ensure that we don't start freeing if we're in the wrong
 	// phase, and the phase can't change on us while we're freeing.
+	//
+	// TODO(go.dev/issue/75771): Due to the grow semantics in
+	// spanQueue.drain, we expect a steady-state of around one spanSPMC per
+	// P, with some spikes higher when Ps have more than one. For high
+	// GOMAXPROCS, or if this list otherwise gets long, it would be nice to
+	// have a way to batch work that allows preemption during processing.
 	lock(&work.spanSPMCs.lock)
 	if gcphase != _GCoff || work.spanSPMCs.all == nil {
 		unlock(&work.spanSPMCs.lock)
-		return false
+		return
 	}
 	rp := &work.spanSPMCs.all
-	gp := getg()
-	more := true
-	for i := 0; i < batchSize && !(preemptible && gp.preempt); i++ {
+	for {
 		r := *rp
 		if r == nil {
-			more = false
 			break
 		}
 		if r.dead.Load() {
@@ -766,7 +764,6 @@ func freeSomeSpanSPMCs(preemptible bool) bool {
 		}
 	}
 	unlock(&work.spanSPMCs.lock)
-	return more
 }
 
 // tryStealSpan attempts to steal a span from another P's local queue.
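The loop in the hunk above relies on a pointer-to-link idiom: rp always addresses the link that leads to the entry being examined (initially work.spanSPMCs.all, afterwards the previous live entry's next field), so a dead entry is unlinked with a single store and a live one is skipped by advancing rp. The body of the dead branch is elided in the hunk. Below is the same idiom in a standalone, runnable sketch; entry, head, and removeDead are invented names for illustration, not runtime code.

package main

import "fmt"

// entry models a node in an intrusive singly linked list, like the list of
// spanSPMCs hanging off work.spanSPMCs.all. Names are illustrative only.
type entry struct {
	next *entry
	id   int
	dead bool
}

// removeDead walks the whole list once, unlinking dead entries in place.
// rp always points at the link that reaches the current entry: initially
// the list head, afterwards the previous live entry's next field.
func removeDead(head **entry) {
	rp := head
	for {
		r := *rp
		if r == nil {
			break
		}
		if r.dead {
			// Unlink r with a single store; in the runtime this is where
			// the spanSPMC's memory would be returned to the OS.
			*rp = r.next
		} else {
			// Keep r and advance to its next link.
			rp = &r.next
		}
	}
}

func main() {
	// Build a small list: entries 0..5, with the even ones marked dead.
	var head *entry
	for i := 5; i >= 0; i-- {
		head = &entry{next: head, id: i, dead: i%2 == 0}
	}

	removeDead(&head)

	for r := head; r != nil; r = r.next {
		fmt.Println("live entry", r.id) // prints 1, 3, 5
	}
}

Because unlinking never has to touch the previous node directly, the whole pass needs only one pointer of state, which is what lets the new code walk the entire list in a single loop while the lock is held.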

src/runtime/mgcmark_nogreenteagc.go

Lines changed: 2 additions & 2 deletions
@@ -67,8 +67,8 @@ type spanSPMC struct {
 	_ sys.NotInHeap
 }
 
-func freeSomeSpanSPMCs(preemptible bool) bool {
-	return false
+func freeDeadSpanSPMCs() {
+	return
 }
 
 type objptr uintptr

src/runtime/mgcsweep.go

Lines changed: 1 addition & 4 deletions
@@ -307,10 +307,7 @@ func bgsweep(c chan int) {
 		// N.B. freeSomeWbufs is already batched internally.
 		goschedIfBusy()
 	}
-	for freeSomeSpanSPMCs(true) {
-		// N.B. freeSomeSpanSPMCs is already batched internally.
-		goschedIfBusy()
-	}
+	freeDeadSpanSPMCs()
 	lock(&sweep.lock)
 	if !isSweepDone() {
 		// This can happen if a GC runs between
