djeedai left a comment:
Thanks for the update, but I think the entire change is wrong.
As far as I understand, Bevy has switched to managing wgpu::BindGroupLayout internally and encourages using PipelineCache, which takes a "key" (in the form of the descriptor) to look up bind group layouts and lazily create them. However, Hanabi already has its own caching mechanism, and this change basically layers Bevy's cache on top of Hanabi's, duplicating all the work.
Unfortunately I don't have time to discuss with the rendering team how to make that change. If someone has context on how to approach the upgrade, I'm eager to hear about it.
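For readers unfamiliar with the pattern being discussed, here is a minimal sketch of a descriptor-keyed lazy cache like the one described above: look up a layout by its descriptor and create it only on the first miss. The types here (`BindGroupLayoutDescriptor`, `BindGroupLayout`, `LayoutCache`) are hypothetical stand-ins to keep the example self-contained, not wgpu's or Bevy's actual API.

```rust
use std::collections::HashMap;

// Hypothetical stand-in for wgpu's descriptor type; real descriptors
// carry bind group entries, visibility flags, etc.
#[derive(Clone, PartialEq, Eq, Hash, Debug)]
struct BindGroupLayoutDescriptor {
    label: &'static str,
    entry_count: u32,
}

// Hypothetical stand-in for the created GPU resource.
#[derive(Debug)]
struct BindGroupLayout {
    id: usize,
}

/// Descriptor-keyed lazy cache: the descriptor is the lookup key,
/// and the layout is created only if no entry exists yet.
#[derive(Default)]
struct LayoutCache {
    layouts: HashMap<BindGroupLayoutDescriptor, BindGroupLayout>,
    next_id: usize,
}

impl LayoutCache {
    fn get_or_create(&mut self, desc: &BindGroupLayoutDescriptor) -> &BindGroupLayout {
        // Borrow the counter separately so the closure can use it
        // while `self.layouts` is also mutably borrowed.
        let next_id = &mut self.next_id;
        self.layouts.entry(desc.clone()).or_insert_with(|| {
            // In real code this is where device.create_bind_group_layout(desc)
            // (or an equivalent) would be called.
            let layout = BindGroupLayout { id: *next_id };
            *next_id += 1;
            layout
        })
    }
}

fn main() {
    let mut cache = LayoutCache::default();
    let desc = BindGroupLayoutDescriptor { label: "particles", entry_count: 3 };
    let a = cache.get_or_create(&desc).id;
    let b = cache.get_or_create(&desc).id; // cache hit: same layout, no re-creation
    println!("{} {}", a, b); // prints "0 0"
}
```

The concern in the comment above is that if both Hanabi and Bevy maintain a cache of this shape, every lookup hashes the descriptor twice and stores the layout twice, for no benefit.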
@pcwalton mentioned he also has a PR ready for the update. I think we should go with his version.
Tried this out for a bit and it seems to work for WebGPU! So for anyone wondering which Hanabi flavor to use, I recommend:
Updated via #521, so closing this. Thanks for the initial work @morgenthum, this helped me with a few changes. In the end I had to switch to
I updated bevy_hanabi in a one-shot prompt with gpt-5.2-codex because I need it for my game (and it works in my game). I reviewed the generated code, and it looks good to me. However, I am not deep into rendering, so I really can't tell if anything in render.rs is broken. Please be careful with this pull request! Just wanted to share it, if someone is interested.
All tests pass and the examples work.