Technical Proposal: Extrapolated Motion Prediction (EMP) for IR Signal Loss Overview #137
sephirothbdt-svg started this conversation in Ideas
Replies: 1 comment
Hiya! Thank you for the feedback. For matters like this where suggestions for code are involved, it's the responsibility of the submitter to provide a PR to implement their ideas in a way that's actually actionable - this can be either one that's ready to be merged, or marked as a Draft to keep track of development as it progresses. That is the spirit of open source, after all! Also, since this is your first time submitting to the repo/org, it should be noted that, currently, we do not accept AI contributions without a disclaimer indicating as such. So we hope you take care to keep these things in mind when contributing! btw, your formatting is scuffed because you didn't wrap it properly in code blocks. Please use GH's Markdown when quoting/pasting code.
Currently, when the camera loses sight of the IR LEDs (due to extreme angles, occlusion, or distance), the cursor position typically freezes or resets. This proposal implements a linear-extrapolation-with-damping algorithm: the firmware "guesses" the cursor's path for a brief window based on its last known velocity, significantly improving the experience at the edges of the screen.
These variables maintain the "momentum" of the cursor between frames.
```cpp
// Add to your main state variables or class members
float lastDeltaX = 0.0f;   // Last recorded horizontal velocity
float lastDeltaY = 0.0f;   // Last recorded vertical velocity
float lastValidX = 0.0f;   // Last position with a confirmed IR lock
float lastValidY = 0.0f;
int lostFramesCount = 0;   // Counter for consecutive frames without IR lock

// --- Tweakable Parameters ---
const int PREDICTION_TIMEOUT = 15;      // Max frames to predict (approx 120-150ms)
const float DAMPING_FACTOR = 0.85f;     // Deceleration multiplier per frame
const float VELOCITY_THRESHOLD = 0.5f;  // Minimum velocity to trigger prediction
```
Explanation:
* `lastDeltaX` / `lastDeltaY`: store the speed and direction of the last valid movement.
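A quick tuning aid (my addition, not from the original post): because each predicted step is the previous one scaled by `DAMPING_FACTOR`, the total coasting distance is a geometric series, so the defaults above bound overshoot to roughly six times the last measured per-frame velocity. A minimal sketch:

```cpp
// Sum of the damped step sizes: 1 + d + d^2 + ... over n frames,
// i.e. total coast distance per unit of last-frame velocity.
float coastMultiplier(float d, int n) {
    float total = 0.0f;
    float step = 1.0f;
    for (int i = 0; i < n; ++i) {
        total += step;  // distance covered this frame
        step *= d;      // next frame's step shrinks by the damping factor
    }
    return total;
}
// With the defaults (d = 0.85, 15 frames), coastMultiplier is about 6.08,
// so a cursor moving 10 px/frame at signal loss coasts roughly 61 px total.
```

This makes it easy to reason about `PREDICTION_TIMEOUT` and `DAMPING_FACTOR` together: raising either one extends how far the cursor drifts before stopping.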
This logic should be placed in the main loop, immediately after the IR data is processed and before the USB report is sent.
```cpp
// Logic to be integrated into the main loop
void processMotionPrediction(float &currentX, float &currentY, bool ledFound) {
    if (ledFound) {
        // 1. SIGNAL OK: Calculate real-time velocity
        lastDeltaX = currentX - lastValidX;
        lastDeltaY = currentY - lastValidY;
        lastValidX = currentX;
        lastValidY = currentY;
        lostFramesCount = 0;  // fresh lock, reset the prediction window
    }
    // 2. SIGNAL LOST: extrapolation branch, explained below
}
```
Explanation: If the sensor reports no points, the code takes over. It adds the lastDelta to the current position. Because we multiply by the DAMPING_FACTOR (0.85), the movement becomes smaller every frame, creating a natural "stop" rather than a jittery jump.
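The snippet above only shows the locked branch; here is a minimal sketch of the signal-lost branch the explanation describes, assuming the state variables from earlier (the function name `predictWhileLost` is mine, not from the post):

```cpp
#include <cmath>

// State mirrored from the earlier snippet (globals for this sketch)
float lastDeltaX = 0.0f, lastDeltaY = 0.0f;
float lastValidX = 0.0f, lastValidY = 0.0f;
int lostFramesCount = 0;
const int PREDICTION_TIMEOUT = 15;
const float DAMPING_FACTOR = 0.85f;
const float VELOCITY_THRESHOLD = 0.5f;

// Hypothetical signal-lost branch: coast along the last known velocity,
// shrinking it every frame so the cursor eases to a stop instead of jumping.
void predictWhileLost(float &currentX, float &currentY) {
    float speed = std::sqrt(lastDeltaX * lastDeltaX + lastDeltaY * lastDeltaY);
    if (lostFramesCount >= PREDICTION_TIMEOUT || speed < VELOCITY_THRESHOLD)
        return;  // timed out, or essentially stationary: just hold position

    currentX = lastValidX + lastDeltaX;  // step along the last known velocity
    currentY = lastValidY + lastDeltaY;
    lastValidX = currentX;               // next frame extrapolates from here
    lastValidY = currentY;
    lastDeltaX *= DAMPING_FACTOR;        // each step is 85% of the previous one
    lastDeltaY *= DAMPING_FACTOR;
    lostFramesCount++;
}
```

The `VELOCITY_THRESHOLD` guard keeps a stationary cursor from drifting on sensor noise when the lock drops.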
3. Jitter Reduction (Exponential Moving Average)
To ensure the prediction starts from a "clean" velocity, we can apply a simple low-pass filter to the output.
```cpp
// Smoothing factor (0.0 to 1.0). Lower = smoother but more lag.
const float SMOOTHING = 0.75f;

float smoothX = (currentX * SMOOTHING) + (previousX * (1.0f - SMOOTHING));
float smoothY = (currentY * SMOOTHING) + (previousY * (1.0f - SMOOTHING));
```
Explanation: This reduces "sensor noise" (shaking hands or IR interference). Combining this with prediction ensures that when the signal is lost, the predicted path isn't based on a random noise spike.
4. Integration with OpenFIRE Preferences
To make this user-adjustable, the following keys should be added to the OF_Prefs system found in OpenFIREcommon.cpp:
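The actual key list appears to have been cut off in this page capture. Purely as an illustration of the shape such settings could take (every name below is a placeholder I invented, not a real OF_Prefs key):

```cpp
// Illustration only -- field names are placeholders, not actual
// OF_Prefs keys (the original list did not survive the page copy).
struct EmpSettings {
    bool predictionEnabled = true;    // master toggle for EMP
    int predictionTimeout = 15;       // frames to coast after losing lock
    float dampingFactor = 0.85f;      // per-frame velocity decay
    float smoothingFactor = 0.75f;    // EMA strength for jitter reduction
};
```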