pool wire write buffer #2799
Changes from 22 commits
```go
package sql

import (
	"sync"
)

const defaultByteBuffCap = 4096

var ByteBufPool = sync.Pool{
	New: func() any {
		return NewByteBuffer(defaultByteBuffCap)
	},
}

type ByteBuffer struct {
	i   int
	buf []byte
}

func NewByteBuffer(initCap int) *ByteBuffer {
	buf := make([]byte, initCap)
	return &ByteBuffer{buf: buf}
}

// Grow records the latest used byte position. Callers
// are responsible for accurately reporting which bytes
// they expect to be protected.
func (b *ByteBuffer) Grow(n int) {
	if b.i+n > len(b.buf) {
		// The runtime alloc'd into a separate backing array, but it chooses
		// the doubling cap using the non-optimal |cap(b.buf)-b.i|*2.
		// We do not need to increment |b.i| because the latest value is in
		// the other array.
		b.Double()
	} else {
		b.i += n
	}
}

// Double expands the backing array by 2x. We do this
// here because the runtime only doubles based on slice
// length.
func (b *ByteBuffer) Double() {
	buf := make([]byte, len(b.buf)*2)
	copy(buf, b.buf)
	b.buf = buf
}

// Get returns a zero length slice beginning at a safe
// write position.
func (b *ByteBuffer) Get() []byte {
	return b.buf[b.i:b.i]
}

func (b *ByteBuffer) Reset() {
	b.i = 0
}
```
This idea has merit and is surely better than allocating a buffer for each request, but the way you're managing the memory is suboptimal. It's also good to use the same backing array for multiple values in a request.

In the main use of this object in the handler, you're getting a zero-length slice (which has some larger backing array) and then calling `append` on it byte by byte. This will grow the backing array in some cases, but it's not being done under your deliberate control. Rather, you're then calling `Double` if the backing array is low on space after the appends have already happened.

Basically: in these methods, you are referring to the `len` of the byte slice, when your concern is usually the `cap`. It's fine to let `append` happen byte by byte as long as it isn't doubling the backing array too often; that's the expensive bit.

I think this would probably work slightly better if you scrapped the explicit capacity management altogether and let the Go runtime manage it automatically for you. Either that, or always manage it explicitly yourself, i.e. before you serialize a value with all those `append` calls. But it's not clear to me that manual management is actually any better if you use the same strategy as the Go runtime does (double once we're full).
I agree with all of this, but there are two caveats that limit our ability to let the runtime handle this for us. (1) The runtime chooses doubling based on the cap of the slice, not the full backing array. So for the current setup, the doubled array is usually actually smaller than the original backing array. (2) Doubled arrays are not reference swapped; we need a handle to the new buffer to know when to grow.

I'm not aware of a way to override the default runtime growth behavior to ignore the slice cap and instead double based on the backing array cap. So `ByteBuffer` still does the doubling, with a `Grow(n int)` interface to track when this should happen. We pay for two mallocs on doubling, because the first one is never big enough. Not calling `Grow` after allocating, or growing by too small a length compared to the allocations used, will stomp previously written memory.