Why do the codec::Encoder and codec::Decoder traits operate on &[u8] and Vec<u8> instead of Bytes / BytesMut?
#576
I’m working on a use case that involves interacting with RESP (Redis Serialization Protocol), and I’m using the redis-protocol crate—specifically the BytesFrame type.
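For context on the wire format in question: RESP frames are type-prefixed and CRLF-delimited. A minimal std-only illustration (a hypothetical helper for this post, not part of redis-protocol):

```rust
// Encode a RESP simple string: '+' type prefix, payload, CRLF terminator.
// E.g. "OK" becomes the five bytes b"+OK\r\n".
fn resp_simple_string(s: &str) -> Vec<u8> {
    let mut out = Vec::with_capacity(s.len() + 3);
    out.push(b'+');
    out.extend_from_slice(s.as_bytes());
    out.extend_from_slice(b"\r\n");
    out
}
```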
I'm integrating this with compio's codec API, whose `Decoder` trait takes a borrowed `&[u8]` as input and produces an owned `Item`.
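(The trait definition from the original post was lost in formatting. A hypothetical sketch of a decoder trait of this shape, borrowed input and owned output, is below; this illustrates the signature style under discussion and is not compio's actual definition, whose associated types and return type may differ.)

```rust
// Illustrative decoder trait in the style the question describes:
// the input is a borrowed byte slice, so any frame the decoder
// returns must own its data (or copy out of `src`).
trait Decoder {
    type Item;
    type Error;

    // Returns Ok(Some(item)) when a complete frame was parsed,
    // Ok(None) when more bytes are needed.
    fn decode(&mut self, src: &[u8]) -> Result<Option<Self::Item>, Self::Error>;
}
```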
The `redis_protocol::decode_bytes` API expects a `&Bytes`, so in my current implementation I'm forced to convert the incoming `&[u8]` into an owned `Bytes` on every call. This works functionally, but it requires copying the entire input buffer into a new `Bytes` allocation on every decode, which is potentially expensive, especially for a streaming protocol like RESP.

My questions are:
- Is there a way to avoid this copy, given the `Decoder` trait's `&[u8]` input and owned `Item` output?
- What is the recommended way to integrate redis-protocol's `decode_bytes` with compio's framing/codec infrastructure?
- Is there a buffer type (e.g. `BytesMut`) that would allow incremental decoding and buffer advancement, rather than re-decoding from a copied slice each time?

In other words, am I fundamentally constrained by the `Decoder` trait's signature here, or is there a better approach I should be using for RESP-style streaming decoding?
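To make the trade-off concrete, here is a std-only sketch of the two approaches. Both helpers are hypothetical: `Vec<u8>` stands in for `Bytes`/`BytesMut`, and the RESP framing is simplified to CRLF-terminated simple strings. The first function mirrors the copy-per-call workaround; the second shows the decode-and-advance pattern that a `BytesMut`-style buffer would allow:

```rust
// Copy-per-call: every decode copies the whole input into an owned
// buffer before parsing, mirroring the &[u8] -> Bytes conversion
// described above.
fn decode_with_copy(src: &[u8]) -> Option<Vec<u8>> {
    let owned = src.to_vec(); // full allocation + memcpy on every call
    let end = owned.windows(2).position(|w| w == b"\r\n")?;
    Some(owned[1..end].to_vec()) // payload without the '+' type prefix
}

// Incremental decode-and-advance: parse one frame from the front and
// drain the consumed bytes, so partially received input stays buffered
// for the next call instead of being re-decoded from a fresh copy.
fn decode_and_advance(buf: &mut Vec<u8>) -> Option<Vec<u8>> {
    let end = buf.windows(2).position(|w| w == b"\r\n")?;
    let frame = buf[1..end].to_vec(); // payload without prefix/CRLF
    buf.drain(..end + 2); // advance past the consumed frame
    Some(frame)
}
```

With a real `BytesMut`, the `drain` above would be `advance` (or a `split_to` followed by `freeze`), which hands out the parsed frame without copying the payload again.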