
Conversation


@benma benma commented Sep 22, 2025

@NickeZ the two middle commits are not usable by themselves, but are kept as separate commits for easier review. Will squash in the end.

There is a brief flash of the "See the BitBoxApp" screen after unlock is done, before it switches to the normal BitBox02 logo screen. This was present before this PR, but it lingers a bit longer with this PR. I already have a prototype that removes the flash completely; I'll open a separate PR for that after this one. edit: fixed in #1618

@benma benma force-pushed the async-bip39 branch 5 times, most recently from 0e925b3 to f71fe57 Compare September 29, 2025 09:31
@benma benma requested a review from NickeZ September 29, 2025 09:39
@benma benma marked this pull request as ready for review September 29, 2025 09:39
salt = hmac::Hmac::from_engine(prfc).to_byte_array();

xor(chunk, &salt);
yield_now(i).await;
Collaborator

@NickeZ NickeZ Oct 6, 2025


I'm just a little bit skeptical about passing in `i`. Won't `i` be a sequence like 1,2,3,2,3,3, for example, if c = 4? Isn't that a very awkward interface? Wouldn't it be better with something linear, so that you can make a progress bar or whatever you want?

Collaborator Author


Good question. c is 2048, the BIP39 stretch count, and there is only one chunk in BIP39 seeds, so `i` will go from 0..2048.

Maybe I'll simply omit that parameter altogether, seeing that we don't actually make use of it. Hosts can emulate it by incrementing a counter inside the callback if they need to keep track for some reason.

Collaborator Author


Done
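The resolved shape of the interface (no index parameter; the caller keeps its own counter inside the callback if it wants progress reporting) can be sketched roughly like this. This is an illustrative sketch, not the firmware code: `yield_now`, `stretch`, `block_on`, and the round callback are all hypothetical names, and the per-round work is a placeholder for the real HMAC/xor step.

```rust
use std::future::{poll_fn, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Completes on the second poll, returning control to the executor once.
fn yield_now() -> impl Future<Output = ()> {
    let mut yielded = false;
    poll_fn(move |cx| {
        if yielded {
            Poll::Ready(())
        } else {
            yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    })
}

/// Stand-in for the BIP39 stretch loop: one "round" of work per iteration,
/// yielding to the executor in between. No round index is passed; the
/// caller counts rounds itself via the callback.
async fn stretch(rounds: u32, mut on_round: impl FnMut()) -> u32 {
    let mut acc = 0u32;
    for _ in 0..rounds {
        acc = acc.wrapping_add(1); // placeholder for the HMAC/xor work
        on_round(); // host-side progress tracking, if desired
        yield_now().await;
    }
    acc
}

/// Tiny busy-polling executor, just to make the sketch runnable.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    // Safety: `fut` is shadowed and never moved after being pinned.
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let mut progress = 0u32;
    let result = block_on(stretch(2048, || progress += 1));
    assert_eq!(result, 2048);
    assert_eq!(progress, 2048); // the callback saw every round
}
```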

})
}

/// Executes both futures in tandem, returning the results of both when both are done.
Collaborator

@NickeZ NickeZ Oct 6, 2025


Collaborator Author


Yeah, I was debating whether I should pull in a dependency or have a local implementation. Do you have a preference?

Collaborator


I would lean towards futures_light, as long as we don't already have something like futures in our dependency tree.

Collaborator Author


Removed the custom yield_now/join and added the futures_light dep. Binary size is even 384 bytes smaller now 🤷
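For reference, the local `join` that was under discussion (later replaced by the dependency) could be implemented along these lines: poll both futures each time, complete when both are done. This is a minimal illustrative sketch, restricted to `Unpin` futures to keep it short; a crate implementation is more general, and none of the names here are the firmware's.

```rust
use std::future::{poll_fn, ready, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Executes both futures in tandem, returning the results of both when
/// both are done. Relies on the inner futures registering wakers via `cx`.
fn join<A, B>(mut a: A, mut b: B) -> impl Future<Output = (A::Output, B::Output)>
where
    A: Future + Unpin,
    B: Future + Unpin,
{
    let (mut ra, mut rb) = (None, None);
    poll_fn(move |cx| {
        // Poll each side only until it has produced a value.
        if ra.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut a).poll(cx) {
                ra = Some(v);
            }
        }
        if rb.is_none() {
            if let Poll::Ready(v) = Pin::new(&mut b).poll(cx) {
                rb = Some(v);
            }
        }
        match (ra.take(), rb.take()) {
            (Some(x), Some(y)) => Poll::Ready((x, y)),
            (x, y) => {
                // Put back whichever side finished early and keep waiting.
                ra = x;
                rb = y;
                Poll::Pending
            }
        }
    })
}

/// Tiny busy-polling executor, just to make the sketch runnable.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let out = block_on(join(ready(2), ready(40)));
    assert_eq!(out, (2, 40));
}
```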

// render is called only every 30th iteration), if we want both to finish at the same time, the
// slowdown factor becomes the following. 10% is added so the animation takes a bit longer than the
// actual unlock.
#define SLOWDOWN_FACTOR (1.1f * (2048 / ((float)LOCK_ANIMATION_N_FRAMES * (float)SCREEN_FRAME_RATE)))
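To see why the factor lands very close to 2, here is the same formula worked through with hypothetical frame constants (the real `LOCK_ANIMATION_N_FRAMES` and `SCREEN_FRAME_RATE` values are not shown in this thread; 45 and 25 are made-up numbers chosen only to illustrate the arithmetic):

```rust
// Hypothetical stand-ins; the actual firmware constants differ.
const LOCK_ANIMATION_N_FRAMES: f32 = 45.0;
const SCREEN_FRAME_RATE: f32 = 25.0;

fn slowdown_factor() -> f32 {
    // 10% margin so the animation outlasts the 2048-round unlock:
    // 1.1 * (2048 / (45 * 25)) = 1.1 * (2048 / 1125) ≈ 2.0025
    1.1 * (2048.0 / (LOCK_ANIMATION_N_FRAMES * SCREEN_FRAME_RATE))
}

fn main() {
    let f = slowdown_factor();
    assert!((f - 2.0).abs() < 0.01); // very close to 2, as noted in review
    println!("{f:.4}");
}
```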
Collaborator


This is very close to 2. Could we set it to that instead, to avoid floats?

Collaborator Author


What's the issue with floats, exactly?

I can do it and move the formula into the comments, but if one wanted to tune it differently, it would not work without going back to floats.

Collaborator Author


Done

static const component_functions_t _component_functions = {
.cleanup = ui_util_component_cleanup,
.render = _render,
.on_event = ui_util_on_event_noop,
Collaborator


on_event is allowed to be NULL, so no need to call a noop implementation.

Collaborator Author


Done

Collaborator

@NickeZ NickeZ left a comment


tACK, wrote a few comments that you can answer/address before merging

We use our fork that has two custom patches:
- fix bitcoin_hashes transitive dep version to avoid duplicates
- add async version of the function to derive a bip39 seed

This will be used to convert our BIP39 unlock to be an async
operation, not blocking the mainloop.
@benma benma force-pushed the async-bip39 branch 3 times, most recently from 5e8e2a2 to ad83bf7 Compare October 10, 2025 07:07
benma added 3 commits October 10, 2025 09:09
This uses `to_seed_normalized_async(...).await` over
`to_seed_normalized(...)` in bip39 unlocking, propagating the
async/await keywords up the stack.

This commit by itself is not functional yet, as the unlock animation
is still timer-interrupt based, which leads to chaos. The next commit
converts the animation into an async task of its own, not depending on
interrupts.

The bip39 unlock loop is made to yield to the executor in each of the
2048 PBKDF2 stretch rounds. In the simulator, however, we don't yield
and instead finish the computation in a blocking fashion like before,
due to a limitation of the simulator: it does not busy-loop the
mainloop (otherwise CPU usage would be at 100%), but only runs it when
there is an incoming USB packet, so yielding in BIP39 would make
unlocking in the simulator *very* slow. Running the mainloop more often
in the simulator does not work well either: either CPU load is too
high, or unlock is too slow.
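The device-vs-simulator strategy described above can be sketched as follows. This is an illustration, not the firmware code: the `simulator` flag is a plain runtime bool here purely for demonstration (the firmware would use a compile-time switch), and the per-round work is a placeholder.

```rust
use std::future::{poll_fn, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

/// Completes on the second poll, returning control to the executor once.
fn yield_now() -> impl Future<Output = ()> {
    let mut yielded = false;
    poll_fn(move |cx| {
        if yielded {
            Poll::Ready(())
        } else {
            yielded = true;
            cx.waker().wake_by_ref();
            Poll::Pending
        }
    })
}

/// On the device: yield each round so the animation task can run.
/// In the simulator: never yield, so the whole derivation completes in a
/// single poll instead of waiting for 2048 USB-driven mainloop turns.
async fn stretch(rounds: u32, simulator: bool) -> u32 {
    let mut acc = 0u32;
    for _ in 0..rounds {
        acc = acc.wrapping_add(1); // placeholder for one PBKDF2 round
        if !simulator {
            yield_now().await;
        }
    }
    acc
}

/// Tiny busy-polling executor, just to make the sketch runnable.
fn block_on<F: Future>(mut fut: F) -> F::Output {
    fn noop(_: *const ()) {}
    fn clone(p: *const ()) -> RawWaker {
        RawWaker::new(p, &VTABLE)
    }
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    let waker = unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) };
    let mut cx = Context::from_waker(&waker);
    let mut fut = unsafe { Pin::new_unchecked(&mut fut) };
    loop {
        if let Poll::Ready(v) = fut.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    // Both paths produce the same result; only the yielding differs.
    assert_eq!(block_on(stretch(2048, true)), 2048);
    assert_eq!(block_on(stretch(2048, false)), 2048);
}
```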
Since it's async now, there should be no need to increase the timeout.