
Conversation

@anunaym14 (Member) commented Jun 16, 2025

I'm not sure how to test this properly, but I would like a general code review.

The new handler does both jitter handling and decoding in one place, reusing the existing jitter buffer and Opus decoder.

@anunaym14 requested review from boks1971 and dennwc on June 16, 2025 12:36
func (r *opusJitterHandler) handleRTP(p *rtp.Packet) {
    isDtx := len(p.Payload) == opusDTXFrameLength

    // Not sure what to do if we have a pending loss and the packet is DTX.
Contributor

It would be good to use the RTP timestamp irrespective of DTX or not; a DTX packet should also have an RTP timestamp (sketched below).

But this interface may be tricky. How does it work now? Is this pushing packets to the app, or is the app pulling? Ideally, the app should be pulling every x ms, and this code should check for available data; if nothing is available, it should use PLC/FEC as applicable.
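For illustration, a minimal sketch of deriving the gap from RTP timestamps regardless of DTX (Opus uses a 48 kHz RTP clock; the function and parameter names here are hypothetical, not from this PR):

// missingSamples returns how many samples fall between the end of the
// previous frame and the start of the current one. A lost 20 ms frame
// at 48 kHz shows up as a 960-sample gap, DTX or not.
func missingSamples(prevTS, prevSamples, curTS uint32) uint32 {
    expected := prevTS + prevSamples // where the current frame should start
    if curTS <= expected {
        return 0 // contiguous (or reordered); nothing to conceal
    }
    return curTS - expected
}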

Member Author

This is currently pushing to the app: the jitter buffer does its thing, then the handler identifies the gaps, fills them with silence/FEC/PLC, and writes to the application.

We're using the writer design on the SDK side, so it's all pushing to the app instead of the app polling it.

silenceBuf := make([]int16, silenceSamples*r.decoder.targetChannels)
if err := r.decoder.w.WriteSample(silenceBuf); err != nil {
    r.logger.Warnw("failed to write silence", err)
}
Contributor

Does the decoder need to be filled with silence samples? That does not sound good. The decoder should have been updated with PLC/FEC samples and ideally should not need filling with silence.

Member Author

It is writing directly to the PCMWriter that the decoder writes to after decoding.

if lostPackets > 1 {
    // For mono audio, if we call DecodePLC right after
    // SFU-generated mono silence, the concealment might not be proper.
    // But we need to pass the buffer for the exact duration of the lost audio.
Contributor

Not able to understand this comment.

Ideally, there should be no dependence on what the SFU does; this code should not even know that it is interacting with an SFU. It is just getting Opus packets and needs to deal with them.

Member Author

Basically:

  • We receive stereo audio from the publisher
  • The decoder state is stereo
  • We receive mono silence from the SFU
  • The decoder state becomes mono
  • We then call DecodePLC, which uses the last decoder state to synthesize data for concealment, and that last state is mono even though the lost packet could've been stereo

It's actually worth checking how libwebrtc handles this.
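To make that sequence concrete, a hedged sketch assuming a libopus binding like gopkg.in/hraban/opus.v2 (not necessarily the decoder wrapper used in this PR):

package conceal

import opus "gopkg.in/hraban/opus.v2"

// concealAfterSilence shows why DecodePLC would extrapolate from the
// SFU-generated mono silence rather than the lost (possibly stereo) frame:
// PLC is derived from whatever the decoder saw last.
func concealAfterSilence(stereoPkt, monoSilencePkt []byte) ([]int16, error) {
    dec, err := opus.NewDecoder(48000, 2) // decoder configured for stereo output
    if err != nil {
        return nil, err
    }
    pcm := make([]int16, 960*2) // one 20 ms frame at 48 kHz, interleaved stereo

    if _, err := dec.Decode(stereoPkt, pcm); err != nil { // state: publisher's stereo audio
        return nil, err
    }
    if _, err := dec.Decode(monoSilencePkt, pcm); err != nil { // state: mono silence
        return nil, err
    }
    // Concealment now reflects the mono silence, not the lost stereo frame.
    if err := dec.DecodePLC(pcm); err != nil {
        return nil, err
    }
    return pcm, nil
}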

Contributor

That is okay: if the lost packet is before the mono silence, it should not be concealed after that point. Concealment should happen when the loss is detected. Again, a pull model will make this simpler.


// Should we reset for the next packet before calling DecodeFEC?
// This will update the decoder's state for the next packet, so it might help.
// But it might also cause some issues if the next packet is SFU-generated silence.
Contributor

I think we should make this simple.

Ideally, there is a pull: if there is data, it gets decoded; if not, silence. So maybe we need two more changes here:

  1. Jitter buffer in pull mode: it looks like it is in push mode now; make it pull.
  2. Maybe write a module like the samplewriter, which just loops and pulls data from the provider for publishing. Similarly, have a loop which pulls data from the remote track and pushes it to the app; that loop can then apply decoding or packet loss concealment (see the sketch at the end of this comment).

In this push mechanism, what happens if there is a burst loss of 300 ms? There will be no handling until the next packet arrives, and at that point it looks like it will do one large conceal.
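A minimal sketch of such a pull loop, with hypothetical interfaces standing in for the jitter buffer, decoder, and PCM writer (none of these names are from this PR):

package pull

import "time"

type frameSource interface {
    Pop() ([]byte, bool) // next buffered frame, if any
}
type decoder interface {
    Decode(pkt []byte, pcm []int16) (int, error)
    DecodePLC(pcm []int16) error
}
type pcmSink interface {
    WriteSample(pcm []int16) error
}

// pullLoop wakes once per frame interval. When the jitter buffer has nothing,
// it conceals immediately instead of waiting for the next packet, so a 300 ms
// burst loss becomes fifteen 20 ms conceals delivered on time.
func pullLoop(src frameSource, dec decoder, sink pcmSink, done <-chan struct{}) {
    ticker := time.NewTicker(20 * time.Millisecond) // one Opus frame
    defer ticker.Stop()
    pcm := make([]int16, 960) // 20 ms at 48 kHz, mono for simplicity

    for {
        select {
        case <-done:
            return
        case <-ticker.C:
            if pkt, ok := src.Pop(); ok {
                if _, err := dec.Decode(pkt, pcm); err != nil {
                    continue
                }
            } else if err := dec.DecodePLC(pcm); err != nil {
                continue
            }
            _ = sink.WriteSample(pcm)
        }
    }
}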

Member Author

The entire pipeline is in push mode right now with the writer design. In the current implementation (regular jitter buffer, no concealment), it is push as well.

Contributor

Yeah, I think we may have to build a pull mode to make this cleaner. Otherwise, doing concealment after a burst loss is already too late; the time has passed for playing out the missing audio.

Member Author

> In this push mechanism, what happens if there is a burst loss of 300 ms? There will be no handling until the next packet arrives, and at that point it looks like it will do one large conceal.

Good point. Initially I was doing only FEC, which needed the next packet, so I had this approach. Then I realized that if multiple packets are lost, FEC can only recover the last one; the other n-1 packets need PLC. But yes, currently it does one large conceal when the next valid packet is popped from the jitter buffer.
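For reference, a hedged sketch of that split (again assuming a binding like gopkg.in/hraban/opus.v2; every name here is illustrative, not from this PR): PLC for the first n-1 lost frames, then FEC from the next packet for the last one:

package conceal

import opus "gopkg.in/hraban/opus.v2"

// concealGap fills a gap of lostPackets frames once the next packet arrives.
// Only the last lost frame can be recovered from the FEC data embedded in
// nextPkt; the preceding lostPackets-1 frames fall back to PLC.
func concealGap(dec *opus.Decoder, nextPkt []byte, lostPackets, frameSamples, channels int, out func([]int16) error) error {
    pcm := make([]int16, frameSamples*channels)
    for i := 0; i < lostPackets-1; i++ {
        if err := dec.DecodePLC(pcm); err != nil {
            return err
        }
        if err := out(pcm); err != nil {
            return err
        }
    }
    if err := dec.DecodeFEC(nextPkt, pcm); err != nil { // last lost frame via FEC
        return err
    }
    return out(pcm)
}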
