Replies: 2 comments
-
Pointers for Code fix

@r-zig: Thanks for sharing the code — it's very helpful and well structured. You're clearly encoding messages with protobuf's delimited encoding. From your description and the error, this strongly indicates a framing problem on the receiving side rather than a problem with the data itself.

🔍 Let's analyze both ends step by step:

✅ Sender side highlights:

```js
const buffer = FileTransferMessage.encodeDelimited(message).finish();
await this.peerDialer.send(buffer, peerId, this.fileTransferProtocol);
```

This is correct — the issue is likely not here, unless the transport is splitting or coalescing the buffer in ways the receiver doesn't handle.
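To make the wire format concrete, here is a minimal pure-JS sketch (illustrative only — protobufjs does all of this internally): `encodeDelimited()` emits an unsigned-varint length prefix followed by the payload bytes, so the receiver must read that prefix first (e.g. via `decodeDelimited`) instead of calling `decode` on the raw framed bytes.

```javascript
// Sketch of the wire format produced by encodeDelimited():
// an unsigned varint length prefix followed by the payload.

function encodeVarint(n) {
  const out = [];
  while (n > 0x7f) {
    out.push((n & 0x7f) | 0x80); // low 7 bits, continuation bit set
    n >>>= 7;
  }
  out.push(n);
  return Uint8Array.from(out);
}

function frameMessage(payload) {
  const prefix = encodeVarint(payload.length);
  const framed = new Uint8Array(prefix.length + payload.length);
  framed.set(prefix, 0);
  framed.set(payload, prefix.length);
  return framed;
}

// Receiver-side mirror: read one delimited message starting at `offset`,
// returning the payload and the offset just past it.
function readFramed(bytes, offset = 0) {
  let len = 0, shift = 0;
  while (true) {
    const b = bytes[offset++];
    len |= (b & 0x7f) << shift;
    if ((b & 0x80) === 0) break;
    shift += 7;
  }
  return { payload: bytes.subarray(offset, offset + len), next: offset + len };
}
```

A 200-byte payload, for example, gets a two-byte prefix (0xc8 0x01) — which is why treating the framed buffer as a bare message makes the decoder misread the first bytes.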
-
@r-zig: Hi Ron. Thanks for the detailed follow-up — and great job digging deeper into the message behavior! Your observations are sharp and right on target for the kind of edge cases that show up with stream multiplexing and transport abstractions like WebRTC over WebSocket with a libp2p relay in the middle. Let me break it down and offer some guidance based on your findings:

🔍 What You're Observing

✅ Let's Address It Step by Step

1. WebRTC over WebSocket via libp2p relay: is out-of-order delivery possible?

In general, no: WebRTC data channels (in ordered mode) and WebSocket transports both preserve byte ordering, and the relay forwards stream data sequentially. So under normal conditions, you should not see out-of-order delivery.

2. But: message fragmentation and stream reassembly can appear as corruption

Your description sounds less like out-of-order arrival and more like stream framing confusion, particularly when several messages are written in quick succession and arrive back-to-back in a single chunk. Given the reader you mentioned using, this would explain the symptoms you're seeing.

🧪 Your Delay Test Confirms It
This supports the theory: the stream reader is consuming data too early, not respecting the message boundaries, and mixing the second message's start into the first.

✅ Recommendations

1. Ensure proper framing on the receiver

Double-check that your receiver is not manually buffering or decoding messages without using length-prefixed framing. The safest approach is:

```js
import lp from 'it-length-prefixed'

async *readMessages(stream) {
  const source = lp.decode()(stream.source)
  for await (const buf of source) {
    // Depending on the it-length-prefixed version, buf may be a
    // Uint8ArrayList — call buf.subarray() to get a plain Uint8Array.
    yield FileTransferMessage.decode(buf.subarray())
  }
}
```

This will ensure that each message starts and ends cleanly, even if multiple messages arrive back-to-back in a single packet.

2. Avoid writing to the stream too fast without flushing

If you're using custom pushables or iterables, ensure that each message is fully written before the next one is pushed. Sometimes adding a small delay between writes helps confirm the diagnosis, but it's not a fix — proper framing is.

3. Sanity check with a hex dump

Try logging:

```js
console.log('Sent buffer:', Buffer.from(buffer).toString('hex').slice(0, 32))
```

and similarly log what you receive. Compare the first 32–64 bytes to ensure they match. If not, the stream is likely being misread (not a transmission error, but a framing issue).

✅ Summary

So in short: the transport is almost certainly delivering bytes in order; the receiver just needs length-prefixed framing to recover the message boundaries.

You're almost there — the good news is that this kind of issue, while subtle, has a well-defined fix once framing is handled cleanly. Let me know if you'd like a full patched version.
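As a follow-up, the failure mode above can be simulated in a few lines of plain JS. This is a minimal sketch of the buffering that `it-length-prefixed`'s decode performs, not its actual implementation; for brevity it uses a single-byte length prefix (the real library uses an unsigned varint), but the reassembly logic is the same: only complete messages are yielded, no matter how the transport chunks the bytes.

```javascript
// Sketch: a reassembly buffer that yields only complete [len][payload]
// frames, regardless of how the transport splits or merges chunks.

function createFrameReader() {
  let buffered = Uint8Array.of();
  return function push(chunk) {
    // Append the new chunk to whatever was left over from earlier pushes.
    const merged = new Uint8Array(buffered.length + chunk.length);
    merged.set(buffered, 0);
    merged.set(chunk, buffered.length);
    buffered = merged;

    // Emit every complete frame; keep any trailing partial frame buffered.
    const messages = [];
    while (buffered.length >= 1 && buffered.length >= 1 + buffered[0]) {
      const len = buffered[0];
      messages.push(buffered.subarray(1, 1 + len));
      buffered = buffered.subarray(1 + len);
    }
    return messages;
  };
}
```

Feeding it two messages split at an awkward boundary — `push(Uint8Array.of(3, 1, 2))` then `push(Uint8Array.of(3, 2, 9, 9))` — yields nothing on the first call (the frame is incomplete) and both complete messages on the second, which is exactly the behavior a naive reader gets wrong.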
-
@r-zig: Hi Ron. Thanks so much for reaching out and for sharing the details of your work — it's awesome to hear you're making progress on stream handling over WebRTC in js-libp2p. That's a tricky but rewarding area, and it's great to see someone digging into the internals of stream multiplexing and encoding!

You're absolutely on the right track in expecting the decoding logic (like in it-protobuf-stream) to handle stream boundaries by first reading the length prefix and then pulling the correct number of bytes. That's typically how message framing works over libp2p streams, especially when protobuf or other length-delimited encodings are in play.

The error you're seeing (Reader.create_typed_array [as create]) usually occurs when the internal buffer doesn't yet contain enough bytes to fulfill a full read, especially after the length prefix has been read but not the payload. In cases like this, it-protobuf-stream or any async iterable parser will throw if the stream ends prematurely or pauses indefinitely without delivering the expected payload size.

A few thoughts and suggestions to consider:
✅ Expectations from it-protobuf-stream

You're correct that it's meant to handle length-prefixed streams — it reads the length prefix and then buffers until that many bytes are available before decoding. However, this assumes a "clean" stream — meaning each message is fully encoded and sent as a single, contiguous chunk.

If your sender is chunking or flushing too early (before the full message is passed through), or if the transport is dropping or delaying data, it could cause it-protobuf-stream to throw due to incomplete reads.

🔍 Things to check
Ensure the full message is written

If you're using something like stream.sink(pushableSource) or await writer.write(message), make sure the entire encoded buffer for each message reaches the sink before the stream is closed or the next message begins.

Use framing explicitly

If you're not using it-protobuf-stream.encode() on the sender side, you might want to ensure that you wrap your outgoing messages with a proper length prefix so that decode() on the receiver side knows what to expect. Example:
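A self-contained sketch of explicit sender-side framing (plain JS standing in for what `it-length-prefixed`'s encode does; `stream.sink` and the payload source are assumptions — adapt them to your sender):

```javascript
// Sketch: wrap each outgoing payload with an unsigned-varint length
// prefix before it reaches the stream sink, so the receiver's decode()
// can find the message boundaries.

function varint(n) {
  const bytes = [];
  while (n > 0x7f) { bytes.push((n & 0x7f) | 0x80); n >>>= 7; }
  bytes.push(n);
  return Uint8Array.from(bytes);
}

async function* withLengthPrefix(payloads) {
  for await (const payload of payloads) {
    const prefix = varint(payload.length);
    const framed = new Uint8Array(prefix.length + payload.length);
    framed.set(prefix, 0);
    framed.set(payload, prefix.length);
    yield framed;
  }
}

// Hypothetical usage with a libp2p-style stream:
//   await stream.sink(withLengthPrefix(outgoingPayloads))
```

Because the framing is an async transform over the payload iterable, each write that reaches the transport is already a complete, self-describing frame.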
Fall back to it-length-prefixed

Some libp2p streams just use it-length-prefixed to wrap byte payloads — especially if the actual encoding (e.g. protobuf, CBOR) is applied separately. You might want to try decoding the raw stream with it-length-prefixed first to see if the message boundaries are what you expect.

🧭 What do libp2p people use?
You're right to ask — libp2p generally doesn't enforce a single encoding format. Developers often use:

- it-length-prefixed for framing
- it-protobuf-stream or custom stream transformers (like it-json, it-msgpack) for encoding/decoding structured data

So your usage of it-protobuf-stream is valid, but it's worth checking that the sender and receiver agree on the same framing and encoding layers.

🙌 Happy to help debug
If you want, feel free to share more of the sender and receiver logic — especially how you're encoding and writing to the stream — and I'd be happy to help debug further.

Thanks again for working on this and reaching out — the js-libp2p ecosystem needs contributors like you who aren't afraid to get into the gritty details of stream protocols!

Best,
Manu