The backpressure mechanism in WHATWG Streams takes some getting used to, but turns out to be simple and powerful. Even so, it remains difficult to reason about backpressure in video processing pipelines because, by definition, this mechanism stops whenever something other than WHATWG Streams is used:
- WebRTC uses `MediaStreamTrack` by default.
- The `VideoEncoder` and `VideoDecoder` classes in WebCodecs have their own queueing mechanism.
- `VideoTrackGenerator` and `MediaStreamTrackProcessor` create a bridge between WebRTC and WebCodecs, with specific queueing rules.
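For instance, WebCodecs exposes its internal queue through the `encodeQueueSize` attribute and the `dequeue` event, so a producer has to throttle itself explicitly rather than rely on Streams backpressure. A minimal sketch of that pattern, using a mock encoder in place of a real `VideoEncoder` (the mock, `whenQueueBelow`, `encodeAll`, and the threshold value are all illustrative, not part of any spec):

```javascript
// Mock with the same queue-related surface as VideoEncoder:
// an encodeQueueSize getter and a "dequeue" event.
function makeMockEncoder() {
  let queueSize = 0;
  const listeners = [];
  return {
    get encodeQueueSize() { return queueSize; },
    addEventListener(type, fn) { if (type === "dequeue") listeners.push(fn); },
    encode(frame) {
      queueSize++;
      // Pretend encoding completes asynchronously.
      setTimeout(() => {
        queueSize--;
        listeners.forEach((fn) => fn());
      }, 0);
    },
  };
}

// Resolve once the encoder's internal queue drops below `max`.
function whenQueueBelow(encoder, max) {
  return new Promise((resolve) => {
    const check = () => {
      if (encoder.encodeQueueSize < max) resolve();
    };
    encoder.addEventListener("dequeue", check);
    check(); // the queue may already be short enough
  });
}

// Manual backpressure: never let more than `maxQueue` frames pile up.
async function encodeAll(encoder, frames, maxQueue = 2) {
  for (const frame of frames) {
    await whenQueueBelow(encoder, maxQueue);
    encoder.encode(frame);
  }
}
```

With a real `VideoEncoder`, the loop body would be the same; only the construction of the encoder and of the frames changes.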
There are good reasons for the divergence of approaches to streams handling across these technologies; see, for example, Decoupling WebCodecs from Streams. From a developer perspective, though, this divergence makes mixing technologies harder. It also creates more than one way to build the same pipeline, with no obvious right approach to queueing and backpressure. In processing pipelines, when should developers rely on the backpressure mechanism provided by Streams? When should they rather rely on internal queueing mechanisms?
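By contrast, when a pipeline stays entirely within Streams, backpressure propagates automatically: a small `highWaterMark` on the slow end is enough to pause the producer, with no explicit queue checks. A minimal sketch, with plain numbers standing in for `VideoFrame` objects and a deliberately slow sink (the function names and the doubling transform are illustrative):

```javascript
// A three-stage pipeline where the slow consumer's queuing strategy
// throttles the whole chain through Streams backpressure.
async function runPipeline() {
  const processed = [];
  const source = new ReadableStream({
    start(controller) {
      for (let i = 0; i < 5; i++) controller.enqueue(i);
      controller.close();
    },
  });
  const doubler = new TransformStream({
    transform(chunk, controller) {
      controller.enqueue(chunk * 2); // placeholder for per-frame processing
    },
  });
  const sink = new WritableStream(
    {
      async write(chunk) {
        await new Promise((r) => setTimeout(r, 1)); // simulate a slow consumer
        processed.push(chunk);
      },
    },
    // Small buffer: the source is paused until the sink catches up.
    new CountQueuingStrategy({ highWaterMark: 1 })
  );
  await source.pipeThrough(doubler).pipeTo(sink);
  return processed; // resolves to [0, 2, 4, 6, 8]
}
```

The trade-off the issue asks about shows up exactly here: this version is shorter and harder to get wrong, but the moment a `VideoEncoder` or `MediaStreamTrack` enters the chain, its internal queue sits outside this propagation.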
Would it be worthwhile to create guidelines on the ins and outs of the different approaches, as a way to inform developers and perhaps further evolutions of underlying web technologies?
Related sample code:
- Sample code that uses internal `VideoEncoder`/`VideoDecoder` queueing: Add WebCodecs in Worker sample webcodecs#583
- Sample code that relies on streams' backpressure mechanism: Experimenting with video processing pipelines on the web