Replies: 1 comment
- You might be interested in following #5221.
I have an application where I want to efficiently render high resolution video streams in 3D space.
On desktop, this isn't much of a problem*, but on the web it becomes rather difficult. In my experience, simply copying each 4K video frame to the GPU from JS ends up being very expensive, eating an entire 30 or 60 fps frame budget, or more.
*oversimplification, but it should be possible to do efficiently without modifying Bevy.
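To put a rough number on that cost (assuming uncompressed RGBA8 frames, i.e. 4 bytes per pixel, which is what a naive JS upload path deals in):

```typescript
// Back-of-envelope cost of copying raw 4K RGBA8 frames from JS to the GPU.
const width = 3840;
const height = 2160;
const bytesPerPixel = 4; // RGBA8
const bytesPerFrame = width * height * bytesPerPixel; // 33,177,600 bytes
const mibPerFrame = bytesPerFrame / (1024 * 1024); // ≈ 31.6 MiB per frame
const gibPerSecondAt60 = (bytesPerFrame * 60) / 1024 ** 3; // ≈ 1.85 GiB/s
console.log(`${mibPerFrame.toFixed(1)} MiB/frame, ${gibPerSecondAt60.toFixed(2)} GiB/s at 60 fps`);
```

Nearly 2 GiB/s of copies before any rendering happens, which is why the naive path swallows the frame budget.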
There is a way to do this in JS + WebGL 2: the browser API can reuse the current image content of an HTML canvas or video element (and, in Chrome, an experimental `VideoFrame` object created by directly decoding an encoded video stream with WebCodecs). The browser handles decoding the video stream and allows an efficient hand-off of the texture to a WebGL 2 (or WebGPU) program. Done correctly, you avoid copying the texture back and forth from the GPU, saving tons of frame budget time. three.js abstracts this texture type as a `VideoTexture`, but under the hood it's using a `texSubImage2D` WebGL 2 call, which can take some HTML elements (video/canvas) as sources.

**Some complications**
Unfortunately, it gets a little more complicated, so it's still a bit tricky to get this right (`texImage2D` might need to be used instead of `texSubImage2D`); in practice, this could defeat the purpose of this entire discussion. Maybe the corresponding WebGPU function doesn't have these issues, but I'm doing my development on Linux, so I haven't had much of a stable WebGPU implementation to test with yet.

Edit: I tested WebGPU video textures (supplied a 4K video instead of the example's) on an M1 MacBook Pro and it doesn't seem to stutter at all.
It sounds like there is a way to get it to work efficiently, but the exact incantation could be specific enough that it's not worth having as an engine feature.
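To make the WebGL 2 path concrete, here's a minimal sketch. It assumes `gl` is a `WebGL2RenderingContext` and the video is already playing; the function names are mine, not three.js or Bevy API:

```typescript
// Allocate immutable texture storage once, sized to the video.
function createVideoTexture(gl: WebGL2RenderingContext, video: HTMLVideoElement): WebGLTexture {
  const tex = gl.createTexture();
  if (tex === null) throw new Error("failed to create texture");
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, video.videoWidth, video.videoHeight);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  return tex;
}

// Per-frame update: texSubImage2D accepts the video element itself as the
// pixel source, so the browser's decoded frame is handed to WebGL without a
// raw pixel copy through JS. If the video's dimensions change, the storage
// allocated above no longer matches and you'd be back to texImage2D — the
// complication mentioned earlier.
function uploadVideoFrame(gl: WebGL2RenderingContext, tex: WebGLTexture, video: HTMLVideoElement): void {
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, video);
}
```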
wgpu has some functions that allow you to smuggle in textures from what it refers to as "external images"; one such function is `copy_external_image_to_texture`.

In my perusal of the current way that Bevy treats textures, everything goes through `Image`, which is designed around the idea that the `data` it contains is actually texture data. I figured that I didn't want to diverge from that, as I want to be able to use the video texture in any context where Bevy can normally use an `Image`.

I was able to cobble together a really gross hack that allowed me to pipe a texture through from a video. I patched `Image` to have an `external` flag, and in `prepare_asset`, if `external` is set, it queries the DOM and passes a canvas element to `copy_external_image_to_texture`. I repurposed `data` to store the element ID of the canvas. My hack always used a canvas, so more hacks would probably be needed to make it work for the other `ExternalImageSource` types. Lastly, something needs to come around and touch the `Image` asset every frame to get it through `prepare_asset` again. Altogether, this technically works, but it is definitely not optimal.

I wanted to start a discussion about how something like this could be implemented in Bevy in a way that isn't really gross. It's bad enough that my hack requires changing the `Image` struct, but that's not the only shortfall. I am not very familiar yet with Bevy internals, so I figure some of you might have slicker implementation ideas 😄
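For reference, the browser-side counterpart of wgpu's `copy_external_image_to_texture` is WebGPU's `GPUQueue.copyExternalImageToTexture`, which takes a video, canvas, or `ImageBitmap` directly as its source. A minimal sketch, with hand-rolled structural types standing in for `@webgpu/types` so it reads on its own (the function name is illustrative):

```typescript
// Structural stand-ins for the relevant slice of the WebGPU API.
type ExternalSource = HTMLVideoElement | HTMLCanvasElement | ImageBitmap;

interface GPUQueueLike {
  copyExternalImageToTexture(
    source: { source: ExternalSource; flipY?: boolean },
    destination: { texture: object },
    copySize: [number, number],
  ): void;
}

interface GPUDeviceLike {
  queue: GPUQueueLike;
}

// Hand the browser's current decoded frame straight to a GPU texture,
// with no raw pixel copy through JS.
function copyVideoToTexture(device: GPUDeviceLike, texture: object, video: HTMLVideoElement): void {
  device.queue.copyExternalImageToTexture(
    { source: video },
    { texture },
    [video.videoWidth, video.videoHeight],
  );
}
```

In the hack above, this is roughly what `prepare_asset` ends up invoking through wgpu each time the `Image` asset is touched.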