Currently, a number of functions such as reorder, simplify, and resample have a separate algorithm kernel (in JS or WASM) that does not depend on the property graph of the glTF document and is applied per primitive (or per accessor), so they could, at least in theory, be multithreaded.
Modern CPUs have been shifting focus from single-thread performance to higher core counts for years, so this could cut processing time by a significant factor.
For small accessors (say, fewer than 100k elements), the kernel should just run on the main thread, since thread creation, WASM initialization, and data cloning all have a cost; for large ones, a thread pool could be used to parallelize the kernel, as sketched below.
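A minimal sketch of that size-based dispatch, assuming a hypothetical `WorkerPool` and `runKernel` helper; only the `Accessor` getters/setters are existing glTF-Transform API:

```ts
// Hypothetical sketch, not glTF-Transform API: WorkerPool, runKernel and
// SIZE_THRESHOLD are assumptions for illustration.
import type { Accessor } from '@gltf-transform/core';

const SIZE_THRESHOLD = 100_000; // element count below which worker overhead dominates

interface WorkerPool {
	exec(data: Float32Array): Promise<Float32Array>; // runs the kernel in a worker thread
}

async function runKernel(
	accessor: Accessor,
	kernel: (data: Float32Array) => Float32Array,
	pool: WorkerPool,
): Promise<void> {
	const src = accessor.getArray() as Float32Array; // assuming float data for illustration
	const dst =
		accessor.getCount() < SIZE_THRESHOLD
			? kernel(src) // small accessor: run inline on the main thread
			: await pool.exec(src); // large accessor: offload to the pool
	accessor.setArray(dst);
}
```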
Another concern is memory: multiple threads consume a multiple of the memory, and V8 effectively caps the heap at 4 GB (and RAM prices keep rising), so both the pool size and the "threading threshold" should be configurable, with a reasonable default (like navigator.hardwareConcurrency / 2 - 1 workers).
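A possible shape for those options, using the defaults suggested above; the interface and names are assumptions, not an existing glTF-Transform type, and it assumes navigator.hardwareConcurrency is available:

```ts
// Hypothetical options shape; not an existing glTF-Transform interface.
interface ParallelOptions {
	/** Number of worker threads in the pool. */
	poolSize?: number;
	/** Accessor element count below which work stays on the main thread. */
	threadingThreshold?: number;
}

const PARALLEL_DEFAULTS: Required<ParallelOptions> = {
	// hardwareConcurrency / 2 - 1, clamped so at least one worker is available.
	poolSize: Math.max(1, Math.floor(navigator.hardwareConcurrency / 2) - 1),
	threadingThreshold: 100_000,
};
```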
As for the API, the current doc.transform API is already async, so no breaking API change would be needed.
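For example, existing calls like the one below would keep working unchanged; only a new option (the `workers` field here is hypothetical) would need to be threaded through:

```ts
// Document.transform() already returns a Promise, so a parallel-aware transform
// could be adopted without breaking callers. Only `workers` is hypothetical;
// everything else is existing glTF-Transform / meshoptimizer API.
import { NodeIO } from '@gltf-transform/core';
import { simplify } from '@gltf-transform/functions';
import { MeshoptSimplifier } from 'meshoptimizer';

const io = new NodeIO();
const document = await io.read('model.glb');

await document.transform(
	simplify({ simplifier: MeshoptSimplifier, ratio: 0.5 /*, workers: 3 (hypothetical) */ }),
);

await io.write('model-simplified.glb', document);
```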