🐛 priority queue: Fix panic within spin #3058
Changes from all commits (diff excerpt):

```
@@ -2,6 +2,7 @@
import (
	"fmt"
	"math/rand/v2"
	"sync"
	"testing"
	"time"

@@ -283,6 +284,41 @@
	Expect(metrics.depth["test"]).To(Equal(0))
	Expect(metrics.adds["test"]).To(Equal(2))
})

It("returns many items", func() {
```
**Member (Author):** Not sure how to verify that the spin goroutine didn't panic. Locally with IntelliJ, I've hit a panic breakpoint and, after continuing, the test was shown as successful (even with the panic). I'm not sure if the same happens in CI.

**Member:** Maybe the breakpoint caused `go test` to not recognize it properly?

**Member (Author):** Yup, maybe.
```go
	// This test ensures the queue is able to drain a large queue without panicking.
	// In a previous version of the code we were calling queue.Delete within q.Ascend
	// which led to a panic in queue.Ascend > iterate:
	// "panic: runtime error: index out of range [0] with length 0"
	q, _ := newQueue()
	defer q.ShutDown()

	for range 20 {
		for i := range 1000 {
			rn := rand.N(100)
			if rn < 10 {
				q.AddWithOpts(AddOpts{After: time.Duration(rn) * time.Millisecond}, fmt.Sprintf("foo%d", i))
			} else {
				q.AddWithOpts(AddOpts{Priority: rn}, fmt.Sprintf("foo%d", i))
			}
		}

		wg := sync.WaitGroup{}
		for range 100 { // The panic only occurred relatively frequently with a high number of goroutines.
			wg.Add(1)
			go func() {
				defer wg.Done()
				for range 10 {
					obj, _, _ := q.GetWithPriority()
					q.Done(obj)
				}
			}()
		}

		wg.Wait()
	}
})
})

func BenchmarkAddGetDone(b *testing.B) {
```
> Not a super strong opinion, but how do you feel about instead appending to a `toDelete` slice in `Ascend` and then calling `Delete` after being done with `Ascend`? It should be safe because we are holding the lock, so a concurrent routine seeing the item when it is supposed to be deleted shouldn't be possible. The reason is that even if it works this way now, I don't think manipulating the tree while iterating is an expected usage; even if it seems to work now, there could be more bugs, or it could stop working in a future version of the lib.
> I ended up using that approach plus your test case in #3060, as this issue made CI fail there. Hope that is okay.
> Yup, you're right, it seems safer to just not delete within `Ascend`. I'll take a look at your PR. Thanks for taking this over.