Currently the performance of the application is borderline -- if we do any more work per inference, we will start to slow down on more limited devices.
First we have to profile to understand where the cycles are going: how much compute is each component -- handpose, facemesh, inference, the drawing utilities, and the other utilities -- consuming?
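One lightweight way to answer this is to wrap each stage of the per-frame pipeline in a timing helper and accumulate totals per stage. This is a minimal sketch; the stage names and `timeStage` helper are illustrative, not existing code -- the real handpose/facemesh/draw calls would be wrapped where they are invoked.

```typescript
// Accumulated wall-clock time per pipeline stage, in milliseconds.
const stageTotals = new Map<string, number>();

// Time a single stage (sync or async) and add its duration to the totals.
async function timeStage<T>(name: string, fn: () => Promise<T> | T): Promise<T> {
  const start = performance.now();
  const result = await fn();
  const elapsed = performance.now() - start;
  stageTotals.set(name, (stageTotals.get(name) ?? 0) + elapsed);
  return result;
}

// Snapshot the per-stage totals, e.g. to log once a second.
function report(): Record<string, number> {
  return Object.fromEntries(stageTotals);
}
```

In the frame loop this would look like `await timeStage("handpose", () => handposeModel.estimateHands(video))`, and periodically logging `report()` shows where the budget goes.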
To make things slightly more complicated, there are several backends the models could be executing on: CPU, WASM, or WebGL. Read more about it here
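Since the fastest backend varies by device, it may be worth benchmarking the same workload under each one. The sketch below is a generic harness under assumptions: `setBackend` stands in for something like TF.js's backend switch, and `workload` for one inference; neither is real code from this project.

```typescript
// Run the same workload under each candidate backend and report the
// mean per-iteration time in milliseconds.
async function benchmarkBackends(
  backends: string[],
  setBackend: (name: string) => Promise<void>, // placeholder for the real backend switch
  workload: () => Promise<void>,               // placeholder for one inference + draw
  iterations = 50,
): Promise<Map<string, number>> {
  const results = new Map<string, number>();
  for (const name of backends) {
    await setBackend(name);
    await workload(); // warm-up: shader compilation / WASM init is not representative
    const start = performance.now();
    for (let i = 0; i < iterations; i++) {
      await workload();
    }
    results.set(name, (performance.now() - start) / iterations);
  }
  return results;
}
```

Running this once at startup (or offline per device class) would let the app pick the cheapest backend instead of hardcoding one.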
Ideas on how to improve performance:
- instead of redrawing on every frame (here), redraw at a fixed interval (once a second?)
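The fixed-interval idea can be sketched as a small throttle gate checked inside the existing frame loop. This is a hypothetical helper, not existing code; `intervalMs = 1000` matches the "once a second" suggestion above.

```typescript
// Returns a predicate that answers "is it time to redraw yet?" given the
// current timestamp (e.g. the one requestAnimationFrame passes in).
function makeThrottle(intervalMs: number): (now: number) => boolean {
  let last = -Infinity; // force the first call to fire
  return (now: number): boolean => {
    if (now - last >= intervalMs) {
      last = now;
      return true;
    }
    return false;
  };
}
```

Usage would be something like `const shouldRedraw = makeThrottle(1000);` and then, in the frame callback, `if (shouldRedraw(timestamp)) { draw(); }` -- inference could stay per-frame or be throttled the same way, depending on what profiling shows is the expensive part.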