13 changes: 5 additions & 8 deletions src/lib/libwasm_worker.js
@@ -300,24 +300,21 @@ if (ENVIRONMENT_IS_WASM_WORKER

  emscripten_lock_async_acquire__deps: ['$polyfillWaitAsync'],
  emscripten_lock_async_acquire: (lock, asyncWaitFinished, userData, maxWaitMilliseconds) => {
-   let dispatch = (val, ret) => {
-     setTimeout(() => {
-       {{{ makeDynCall('vpiip', 'asyncWaitFinished') }}}(lock, val, /*waitResult=*/ret, userData);
-     }, 0);
-   };
    let tryAcquireLock = () => {
      do {
        var val = Atomics.compareExchange(HEAP32, {{{ getHeapOffset('lock', 'i32') }}}, 0/*zero represents lock being free*/, 1/*one represents lock being acquired*/);
-       if (!val) return dispatch(0, 0/*'ok'*/);
+       if (!val) return {{{ makeDynCall('vpiip', 'asyncWaitFinished') }}}(lock, 0, 0/*'ok'*/, userData);
        var wait = Atomics.waitAsync(HEAP32, {{{ getHeapOffset('lock', 'i32') }}}, val, maxWaitMilliseconds);
      } while (wait.value === 'not-equal');
#if ASSERTIONS
      assert(wait.async || wait.value === 'timed-out');
#endif
      if (wait.async) wait.value.then(tryAcquireLock);
-     else dispatch(val, 2/*'timed-out'*/);
+     else return {{{ makeDynCall('vpiip', 'asyncWaitFinished') }}}(lock, val, 2/*'timed-out'*/, userData);
    };
-   tryAcquireLock();
+   // Asynchronously dispatch acquiring the lock so that we have uniform control flow in both
+   // cases when the lock is acquired, and when it needs to wait.
+   setTimeout(tryAcquireLock);

As discussed in b25abd5#r168975511

I would prefer this to run as fast as possible when it can. I don't think it is unreasonable to expect emscripten_lock_async_acquire to run directly, with everything that entails, when the lock is free.

But if we are to do this, would queueMicrotask work here instead? That would avoid things like renders running before we try to acquire the lock.
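
For concreteness, a hypothetical sketch of what that suggestion could look like against the tryAcquireLock helper in this patch (not what the PR does):

// Queue the first acquisition attempt as a microtask instead of a task: it still runs
// with an empty Wasm call stack, but before the browser gets a chance to render.
queueMicrotask(tryAcquireLock);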

Collaborator Author

It might be possible, although the consensus here is that a microtask should be a short-lived task. We have no way to state those semantics on behalf of the user. If this were a new API, we could freely say that this should be the model, but since this is an already shipped API, we cannot change or impose semantics on existing users.

I think it would be best to compose using the existing functions. For example,

if (emscripten_lock_busyspin_wait_acquire(lock, 0.5/*msecs*/))
  weHaveLock(userData);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

or

if (emscripten_lock_try_acquire(lock))
  weHaveLock(userData);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

would give a convenient way to take the fast acquisition path synchronously.

Performance here should be optimal whenever there is no long-lived contention. And in the case where there is > 0.5 msec of contention, latency will be on the slow path in any case, since emscripten_lock_async_acquire() will yield to the event loop (the single extra CAS it performs will not be observable; emscripten_lock_busyspin_wait_acquire() will already have performed millions of CASes, so one more won't matter).


> although the consensus here is that a microtask should be a short-lived task

Do you have any links to these discussions?

Task vs. microtask has no semantic difference in "size"; the difference is just in when they are expected to run.

The thing we need to ask ourselves is whether we want the Atomics.waitAsync to be able to happen before or after a page render (or other similar tasks) when it is queued. E.g. should we always yield to the event loop when we want to acquire the lock asynchronously? I would argue we should only yield if the lock would actually block.
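
To illustrate the timing difference in question, a standalone sketch (not tied to the lock code) of how microtasks and setTimeout tasks interleave with rendering:

// Microtasks drain right after the current task finishes, before the browser can render;
// setTimeout tasks run in a later event loop turn, possibly after a render.
console.log('A: synchronous');
queueMicrotask(() => console.log('B: microtask, before any render'));
setTimeout(() => console.log('C: task, in a later turn'), 0);
// Logs in the order A, B, C.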

But in the end, I don't think this matters much compared to the big win of removing the yielding inside the critical section, which this PR fixes 👍

Collaborator Author @juj, Oct 29, 2025

> Do you have any links to these discussions?

This was from https://developer.mozilla.org/en-US/docs/Web/API/Window/queueMicrotask and https://developer.mozilla.org/en-US/docs/Web/API/HTML_DOM_API/Microtask_guide: "The microtask is a short function which will run after ..."

> The thing we need to ask ourselves is whether we want the Atomics.waitAsync to be able to happen before or after a page render (or other similar tasks) when it is queued. E.g. should we always yield to the event loop when we want to acquire the lock asynchronously? I would argue we should only yield if the lock would actually block.

I understand you're in the mindset of designing what would be the best API, and I agree with that; but given this is an already shipped API, that would change the semantics. For example, if a user writes

emscripten_set_timeout(doSomething, 0 /*msecs*/, userData);
emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

Should the async timeout callback trigger first, or the async lock acquire callback?

Currently the computation model is consistent: the timeout always triggers first. Using a microtask would make the async lock callback trigger either before or after the timeout, depending on whether or not there was contention from other threads.

Maybe it would be a better design to say "well, the above should be unspecified, don't rely on it as an end user." But given this is an already shipped API, I am very cautious about changing that behavior.

It is possible to manually control this behavior with the two constructs above to get that synchronous functionality, so one can already get what is needed with a couple of extra lines and without a performance penalty.


I missed the current description.
"The calling thread will asynchronously try to obtain the given lock after the calling thread yields back to the event loop"

With it being explicitly defined, I agree we should not change its behavior; that would be different if it were undefined.

Collaborator

Personally I think using the microtask queue should be fine here, and we could update the documentation to say "yields back to the microtask queue".

If there is code out there that is dependent on the ordering of the two callbacks above, that seems way too fragile. Also, such code would have been broken by the existence of the current bug (i.e. the bug this PR fixes basically ensures that no such code exists in the wild yet, so now would be a good time to switch to the microtask queue... although that should be a followup PR, I think).

Collaborator Author

> If there is code out there that is dependent on the ordering of the two callbacks above, that seems way too fragile.

Judging "your code would be poor anyways, so it's fine to break it" might not work out too well in general.

> Also, such code would have been broken by the existence of the current bug

The bug being fixed here is not that emscripten_lock_async_acquire() would unconditionally deadlock, but that it acquired the lock synchronously (with only the callback deferred), which prevents the calling thread from later locking it synchronously.

In the above example, the user might not necessarily acquire the lock in doSomething(), but might be doing something else altogether.

> the current bug this PR fixes basically ensures that no such code exists in the wild yet

This is not correct. There is no issue with using emscripten_lock_async_acquire() by itself (with any sort of interaction with respect to the event loop). The issue only arises in the scenario where the calling thread also attempts to synchronously lock the same mutex right after issuing the call to async locking.

One might also argue that such behavior is "fragile" and should not be used, since the very reason async locking exists is to avoid busy-spinning the main thread, which is the often-cited "considered harmful" behavior. I.e. this is basically fixing code that resides in fragile territory to begin with. If we were web purists, the "proper" use of emscripten_lock_async_acquire() would not also try to separately block the main thread.


Another problem with switching to a microtask is that repeatedly acquiring a lock will then cause the browser to hang, whereas with the current timeout it acts like setTimeout(0), pumping the event loop in between.
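
A standalone sketch of that hang scenario (illustrative only, not code from this PR; acquireRepeatedly is a made-up name): if each acquisition is dispatched as a microtask and the callback immediately re-acquires, the microtask queue never drains, so the browser never gets back to the event loop:

// With microtask dispatch, a release/re-acquire loop starves the event loop:
function acquireRepeatedly() {
  queueMicrotask(() => {
    // ... lock acquired here, do some work, release the lock ...
    acquireRepeatedly(); // queues another microtask before the current checkpoint ends,
  });                    // so rendering and input handling never get a chance to run
}
// With the current setTimeout(tryAcquireLock) dispatch, each iteration is a separate
// task, so the browser can render and process events between iterations.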

Ultimately though, the reason I hesitate to use a microtask (even if we decided it was OK to change the semantics here) is that it would give the default behavior of this API scheduling semantics that depend on multithreading contention. That reads really scary to me, way scarier than reasoning about busy-spinning on the main thread.

I.e. if web folks are already arguing that the main thread should not busy-spin because it is too hard to reason about wait times under contention, then I would argue that scheduling order semantics should not be affected by contention either, since reasoning about that is way, way harder than reasoning about wait times.

I think what we could do is, in addition to the existing emscripten_request_animation_frame() API, complement it with emscripten_request_idle_callback() and emscripten_queue_microtask() functions. Then users would have a built-in way to write custom scheduling, e.g. with

if (emscripten_lock_try_acquire(lock))
  emscripten_queue_microtask(weHaveLock);
else
  emscripten_lock_async_acquire(lock, weHaveLock, userData, INFINITY);

and the control of scheduling would always explicitly remain with the user, and not be dictated by lock contention.

},

emscripten_semaphore_async_acquire__deps: ['$polyfillWaitAsync'],
9 changes: 6 additions & 3 deletions system/include/emscripten/wasm_worker.h
@@ -185,9 +185,12 @@ void emscripten_lock_busyspin_waitinf_acquire(emscripten_lock_t *lock __attribut
// timeout parameter as int64 nanosecond units, this function takes in the wait
// timeout parameter as double millisecond units. See
// https://github.com/WebAssembly/threads/issues/175 for more information.
- // NOTE: This function can be called in both main thread and in Workers. If you
- // use this API in Worker, you cannot utilise an infinite loop programming
- // model.
+ // NOTE: This function can be called in both the main thread and in Workers.
+ // NOTE 2: This function will always acquire the lock asynchronously. That is,
+ //         acquiring the lock will only be attempted after the current control
+ //         flow yields back to the browser, so that the Wasm call stack is empty.
+ //         This is to guarantee a uniform control flow. If you use this API in
+ //         a Worker, you cannot utilise an infinite loop programming model.
void emscripten_lock_async_acquire(emscripten_lock_t *lock __attribute__((nonnull)),
emscripten_async_wait_volatile_callback_t asyncWaitFinished __attribute__((nonnull)),
void *userData,
5 changes: 5 additions & 0 deletions test/test_browser.py
@@ -5329,6 +5329,11 @@ def test_wasm_worker_lock_wait2(self):
  def test_wasm_worker_lock_async_acquire(self):
    self.btest_exit('wasm_worker/lock_async_acquire.c', cflags=['--closure=1', '-sWASM_WORKERS'])

+ # Tests the emscripten_lock_async_acquire() function when the lock is acquired both synchronously and asynchronously.
+ @also_with_minimal_runtime
+ def test_wasm_worker_lock_async_and_sync_acquire(self):
+   self.btest_exit('wasm_worker/lock_async_and_sync_acquire.c', cflags=['-sWASM_WORKERS'])

  # Tests emscripten_lock_busyspin_wait_acquire() in Worker and main thread.
  @also_with_minimal_runtime
  def test_wasm_worker_lock_busyspin_wait(self):
25 changes: 25 additions & 0 deletions test/wasm_worker/lock_async_and_sync_acquire.c
@@ -0,0 +1,25 @@
#include <emscripten/wasm_worker.h>
#include <emscripten/threading.h>
#include <stdio.h>

emscripten_lock_t lock = EMSCRIPTEN_LOCK_T_STATIC_INITIALIZER;

void on_acquire(volatile void* address, uint32_t value,
                ATOMICS_WAIT_RESULT_T waitResult, void* userData) {
  printf("on_acquire: releasing lock.\n");
  emscripten_lock_release(&lock);
  printf("on_acquire: released lock.\n");
  exit(0);
}

int main() {
  printf("main: async acquiring lock.\n");
  emscripten_lock_async_acquire(&lock, on_acquire, 0, 100);
  printf("main: busy-spin acquiring lock.\n");
  emscripten_lock_busyspin_waitinf_acquire(&lock);
  printf("main: lock acquired.\n");
  emscripten_lock_release(&lock);
  printf("main: lock released.\n");
  emscripten_exit_with_live_runtime();
  return 1;
}