
Conversation

@JordonPhillips (Contributor)

This adds a class that allows bytes to be asynchronously exchanged. This primarily serves the purpose of enabling event streaming. Event streams will create a provider under the hood that will be written to. That provider will then be passed as the transport request body, which will read it as an async bytes iterable.
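To make the flow concrete, here is a minimal sketch of how the two sides might interact. The class body, method names, and the demo below are illustrative assumptions, not the exact implementation in this PR.

```python
import asyncio
from collections import deque


class AsyncBytesProvider:
    """Illustrative sketch: writers feed bytes in, and a single transport
    coroutine consumes them as an async iterable. Not the PR's actual code.
    """

    def __init__(
        self, initial_data: bytes | None = None, max_buffered_chunks: int = 16
    ) -> None:
        self._chunks: deque[bytes] = deque()
        if initial_data is not None:
            self._chunks.append(initial_data)
        self._max_buffered_chunks = max_buffered_chunks
        self._closed = False
        self._cond = asyncio.Condition()

    async def write(self, data: bytes) -> None:
        async with self._cond:
            # Backpressure: block writers while the buffer is full.
            while len(self._chunks) >= self._max_buffered_chunks and not self._closed:
                await self._cond.wait()
            if self._closed:
                raise RuntimeError("provider is closed")
            self._chunks.append(data)
            self._cond.notify_all()

    async def close(self) -> None:
        async with self._cond:
            self._closed = True
            self._cond.notify_all()

    def __aiter__(self) -> "AsyncBytesProvider":
        return self

    async def __anext__(self) -> bytes:
        async with self._cond:
            # Wait until a chunk arrives or the stream is closed.
            while not self._chunks and not self._closed:
                await self._cond.wait()
            if self._chunks:
                chunk = self._chunks.popleft()
                self._cond.notify_all()  # let a blocked writer proceed
                return chunk
            raise StopAsyncIteration


async def demo() -> None:
    provider = AsyncBytesProvider(b"event-1")

    async def produce() -> None:
        await provider.write(b"event-2")
        await provider.close()

    asyncio.create_task(produce())
    # The transport sees the provider as an async bytes iterable.
    async for chunk in provider:
        print(chunk)


asyncio.run(demo())
```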

This will also fail until the protocol test suppression is merged.

By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.

@JordonPhillips requested a review from a team as a code owner on October 18, 2024 13:55
@JordonPhillips changed the base branch from develop to suppress-new-protocol-tests on October 18, 2024 13:58
@JordonPhillips force-pushed the async-write-provider branch 2 times, most recently from 00a1103 to cca9900 on October 18, 2024 14:00
Base automatically changed from suppress-new-protocol-tests to develop on October 18, 2024 15:40
```python
    """This adds a class that allows bytes to be asynchronously exchanged.
    This primarily serves the purpose of enabling event streaming. Event
    streams will create a provider under the hood that will be written
    to. That provider will then be passed as the transport request body,
    which will read it as an async bytes iterable.
    """

    def __init__(
        self, initial_data: bytes | None = None, max_buffered_chunks: int = 16
```
Contributor

Just curious why/how we arrived at 16 for buffering? Do we have something informing that or is it just an arbitrary default?

Contributor (Author)

It's arbitrary. Ultimately there's only one coroutine reading, and N coroutines writing. If N is greater than 1, the buffer will grow. That said, the reader is going to work very quickly since it just dumps the data into a BytesIO immediately. Writers will likely be slower to push data. And I don't expect there to be that many writers to a single stream anyhow.
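In other words, the reader's whole job is roughly this (a hypothetical sketch; the helper name is made up):

```python
import io


async def drain_to_buffer(provider) -> bytes:
    # Hypothetical helper showing the single reader's work: move each
    # chunk into an in-memory buffer as soon as it arrives. With this
    # little work per chunk, writers are the slow side in practice.
    buffer = io.BytesIO()
    async for chunk in provider:
        buffer.write(chunk)
    return buffer.getvalue()
```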

Depending on how real-world usage ends up, we might be able to improve performance by having separate Conditions for readers and writers that share a lock (sketched below). That may result in the side with fewer participants cycling through their queue faster. But I'm not certain about that, and it seems premature. I'd rather wait until we have actual benchmarks on a live service before adding that extra complication.
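For reference, that split might look something like this (a hedged sketch of the idea, not code from this PR):

```python
import asyncio

# Purely illustrative: two Conditions sharing one lock, so notifying
# readers doesn't also wake every waiting writer, and vice versa.
lock = asyncio.Lock()
readable = asyncio.Condition(lock)  # signaled when data is available
writable = asyncio.Condition(lock)  # signaled when the buffer has room

buffer: list[bytes] = []
MAX_BUFFERED_CHUNKS = 16


async def write(data: bytes) -> None:
    async with writable:  # acquires the shared lock
        while len(buffer) >= MAX_BUFFERED_CHUNKS:
            await writable.wait()
        buffer.append(data)
        readable.notify()  # wakes only a waiting reader


async def read() -> bytes:
    async with readable:  # acquires the same shared lock
        while not buffer:
            await readable.wait()
        data = buffer.pop(0)
        writable.notify()  # wakes only a waiting writer
        return data
```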

Co-authored-by: Nate Prewitt <[email protected]>
@nateprewitt (Contributor) left a comment

:shipit: once tests pass.

@JordonPhillips merged commit d570363 into develop on October 18, 2024 (5 checks passed)
@JordonPhillips deleted the async-write-provider branch on October 18, 2024 20:47