Addressing Concurrency Issues in MplexStream and YamuxStream #677
-
Hi @kaneki003, Thanks for the detailed explanation and for iterating on this! I totally get where you're coming from with the workflow you initially envisioned; it makes sense to think of `recv_window` that way at first glance. However, there's a small but important misunderstanding about `recv_window` that we need to clear up. You mentioned decrementing `recv_window` in `read` to track the “current limit” of received data, triggering a window update when it's low. I see why you did this, it feels intuitive to mirror the sender's side of the accounting. In Yamux, `recv_window` is the credit we have granted the remote peer: it bounds how much more data the peer may send us, so it shrinks when data arrives off the wire, not when the application later reads it. The sender decrements its `send_window` as it transmits and stalls once that hits zero, waiting for our window update. The problem with decrementing `recv_window` in `read` is that by then the peer has already spent that credit on arrival; charging it again at read time skews the accounting and can yield a wrong increment in the next window update.
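To make the distinction concrete, here is a minimal sketch of the usual receive-side accounting (hypothetical names, assuming trio; this is not the PR's actual code): `recv_window` shrinks when a data frame arrives off the wire, and credit is handed back to the peer based on what `read` has consumed.

```python
import trio

# 256 KiB, Yamux's initial per-stream window
DEFAULT_WINDOW_SIZE = 256 * 1024


class ReceiveSide:
    """Hypothetical receive-side bookkeeping for one Yamux stream."""

    def __init__(self) -> None:
        # Credit granted to the peer: it may send at most this many bytes.
        self.recv_window = DEFAULT_WINDOW_SIZE
        self.buffer = bytearray()
        self.window_lock = trio.Lock()

    async def on_data_frame(self, data: bytes) -> None:
        # Runs when a DATA frame arrives off the wire. This is where
        # recv_window shrinks: the peer has spent part of its credit.
        async with self.window_lock:
            if len(data) > self.recv_window:
                raise RuntimeError("peer exceeded the flow-control window")
            self.recv_window -= len(data)
            self.buffer.extend(data)

    async def read(self, n: int) -> bytes:
        # read() only drains the local buffer. It is the right place to
        # hand credit *back*, because the bytes are now consumed.
        data = bytes(self.buffer[:n])
        del self.buffer[:n]
        async with self.window_lock:
            self.recv_window += len(data)
            await self.send_window_update(len(data))
        return data

    async def send_window_update(self, increment: int) -> None:
        ...  # transmit a WINDOW_UPDATE frame carrying `increment`
```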
-
Thanks @Paschal for the detailed explanation. Regarding the `recv_window` handling: in your idea above, if I understand it correctly, `recv_window` should only be decremented when data actually arrives from the peer, not inside `read`. Also, your concern regarding the update trigger:

```python
if self.recv_window <= DEFAULT_WINDOW_SIZE // 2:
    logging.debug(
        f"Stream {self.stream_id}: "
        f"Low recv_window ({self.recv_window}), sending update"
    )
    await self.send_window_update(None, skip_lock=True)
```

This can be understood like this: suppose `recv_window` drops to half of `DEFAULT_WINDOW_SIZE` or below; only then does the stream send an update. Also, taking the worst case where `recv_window` has dropped to zero,

```python
increment_value = DEFAULT_WINDOW_SIZE - self.recv_window
```

the increment restores the full `DEFAULT_WINDOW_SIZE`. If possible, could you also highlight the case/example where a wrong increment value is being calculated?
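Plugging numbers into that formula makes the intent concrete (assuming Yamux's 256 KiB default window; a worked example, not code from the PR):

```python
DEFAULT_WINDOW_SIZE = 256 * 1024  # 262144 bytes

# Worst case: the peer has consumed the entire window.
recv_window = 0
increment_value = DEFAULT_WINDOW_SIZE - recv_window
assert increment_value == 262144  # the update restores the full window

# At the trigger threshold: half the window has been consumed.
recv_window = DEFAULT_WINDOW_SIZE // 2
increment_value = DEFAULT_WINDOW_SIZE - recv_window
assert increment_value == 131072  # tops the window back up to the default
```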
-
Hi @kaneki003, Thanks for the detailed reply and for explaining why you held off on that change.
-
Description:
We’re working on improving the thread-safety and data integrity of `MplexStream` and `YamuxStream` in PR #639 ([link to PR]). This discussion thread is to gather feedback, clarify technical details, and explore the best solutions for handling concurrency issues in these stream multiplexers. Thanks to @seetadev for suggesting we move the detailed conversation here to benefit related initiatives.

Overview of PR #639
The PR addresses interleaving issues during concurrent reads and writes in `MplexStream` and `YamuxStream`. Here’s a summary:

What Was Wrong? (Issue #639)
- Concurrent reads and writes on `MplexStream` and `YamuxStream` could lead to data interleaving or race conditions, potentially causing data corruption.
- `MplexStream` lacks robust synchronization, making it prone to issues with `trio.MemoryReceiveChannel`.
- `YamuxStream` has existing synchronization (`window_lock` and `streams_lock`), but a flow control bug was identified when sending large messages (> `DEFAULT_WINDOW_SIZE`); see the sketch after this list.
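For context on that last bullet: a Yamux sender may not push more than its current window in one burst, so a message larger than `DEFAULT_WINDOW_SIZE` has to go out in chunks, pausing until the receiver grants more credit. A minimal sketch of that loop (hypothetical names, assuming trio; not the PR's code). If the receiver never sends a window update, the wait below never returns, which is the deadlock described above:

```python
import trio


class SendSide:
    """Hypothetical send-side flow control for one Yamux stream."""

    def __init__(self) -> None:
        self.send_window = 256 * 1024
        self._window_bumped = trio.Event()  # set when a WINDOW_UPDATE arrives

    async def write(self, data: bytes) -> None:
        while data:
            while self.send_window == 0:
                # Blocks until the receiver grants more credit. With the
                # recv_window bug, no update ever comes: deadlock.
                await self._window_bumped.wait()
                self._window_bumped = trio.Event()  # trio Events are one-shot
            chunk, data = data[: self.send_window], data[self.send_window :]
            self.send_window -= len(chunk)
            await self.send_data_frame(chunk)

    def on_window_update(self, increment: int) -> None:
        self.send_window += increment
        self._window_bumped.set()

    async def send_data_frame(self, chunk: bytes) -> None:
        ...  # transmit a DATA frame carrying `chunk`
```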
How Was It Fixed?
- Added `read_lock` and `write_lock` attributes to ensure serialized access during reads and writes (illustrated in the sketch after this list).
- Fixed a bug in `send_window_update` where `recv_window` wasn’t updating correctly, causing deadlocks with large messages.
- Updated the `read` method to handle window updates properly and removed incorrect window updates from the `write` method.
- Restructured the `write` method to avoid lock release errors.
- Documented the concurrency issues in `MplexStream` and their impact on thread-safety.
- Added tests (`test_yamux_read_write_lock.py`) that demonstrate race conditions without proper synchronization.
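As a rough illustration of the first bullet (a simplified sketch under trio, not the PR's exact code), per-direction locks serialize concurrent readers and writers so their chunks cannot interleave:

```python
import trio


class LockedStream:
    """Hypothetical wrapper adding per-direction locks to a stream."""

    def __init__(self, stream) -> None:
        self._stream = stream
        self.read_lock = trio.Lock()
        self.write_lock = trio.Lock()

    async def read(self, n: int) -> bytes:
        # Only one task reads at a time, so two concurrent readers can
        # never each receive half of the same message.
        async with self.read_lock:
            return await self._stream.read(n)

    async def write(self, data: bytes) -> None:
        # Writers are serialized too: frames from two messages are never
        # interleaved on the wire.
        async with self.write_lock:
            await self._stream.write(data)
```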
Current Status
- Reviewing the `YamuxStream` changes and evaluating whether locks are needed.
Next Steps
- Revisit `recv_window` handling in the `read` method.

Get Involved
Please share your thoughts, test cases, or benchmarking results. If you’ve worked on similar concurrency issues in `py-libp2p` or other networking libraries, your insights would be invaluable. Let’s collaborate to make these multiplexers robust and performant.

Pinging @kaneki003, @seetadev, and @acul71 for their ongoing contributions. Thanks for driving this forward!