Add asyncio.Executor matching concurrent.futures.Executor #129769
```python
# New file (226 lines added); excerpt shown.
import time
from inspect import isawaitable
from typing import (Any, AsyncIterable, Awaitable, Iterable, NamedTuple,
                    Optional, Protocol, cast)

from .exceptions import CancelledError
from .futures import Future
from .locks import Event
from .queues import Queue, QueueShutDown
from .tasks import FIRST_COMPLETED, Task, create_task, gather, wait, wait_for


__all__ = (
    "Executor",
)


class _WorkFunction[R](Protocol):
    def __call__(self, *args, **kwargs) -> R | Awaitable[R]:
        ...
```
Suggested change:

```diff
-class _WorkFunction[R](Protocol):
-    def __call__(self, *args, **kwargs) -> R | Awaitable[R]:
-        ...
+class _WorkFunction[R, **P](Protocol):
+    def __call__(self, *args: P.args, **kwargs: P.kwargs) -> R | Awaitable[R]:
+        ...
```
If the module has type hints, let's use them consistently everywhere.
Please use a dataclass instead of a named tuple.
I'm sorry, I disagree with accepting two function colors.
In my mind, an async executor should work with async functions only.
Processing sync functions without blocking calls inside doesn't make sense; they could be called without an executor.
Handling sync functions with blocking calls requires a `to_thread()` wrapper; that is not the subject of the proposed class.
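For illustration, the wrapper this comment refers to could look like the minimal sketch below; `blocking_io` is a made-up stand-in for real blocking work, not code from the PR:

```python
import asyncio
import time


def blocking_io():
    # Stand-in for real blocking work (file I/O, a C extension call, etc.).
    time.sleep(0.05)
    return "io done"


async def main():
    # A sync blocking callable is not a job for an async executor directly;
    # wrap it with to_thread() so the executor only ever sees coroutines.
    wrapped = lambda: asyncio.to_thread(blocking_io)
    return await wrapped()


print(asyncio.run(main()))  # io done
```

With this convention, the executor never needs to branch on function color: every submitted callable returns an awaitable.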
Please make the function a private method of Executor.
How could it be None?
None was used as a terminator; however, I see now that shutting the queue down works too. I've refactored it; thank you for pointing this out.
I don't know whether the worker should create a task per processed item.
It looks like unnecessary overhead now.
I understand; however, I think create_task is needed to race fn against possible cancellation of the work item, and to cancel fn's execution if that happens.
If I removed create_task and just awaited fn(...), it would prevent immediate cancellation of long-running work items, which could be undesirable for users, e.g. when leaving an Executor's context due to a raised exception.
Do you know of a way to achieve this behavior without using create_task?
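For reference, the race described here can be sketched outside the executor. `run_cancellable` and the timings below are illustrative, not part of the PR:

```python
import asyncio


async def run_cancellable(fn, fut: asyncio.Future):
    # Wrap the callable in a task so that cancelling the caller-facing
    # future can interrupt it mid-flight via FIRST_COMPLETED.
    task = asyncio.ensure_future(fn())
    await asyncio.wait([task, fut], return_when=asyncio.FIRST_COMPLETED)
    if fut.cancelled():
        task.cancel()  # propagate cancellation into the running work
        return "cancelled"
    fut.set_result(task.result())
    return "done"


async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()
    # Cancel the future shortly after the long-running work starts.
    loop.call_later(0.01, fut.cancel)
    return await run_cancellable(lambda: asyncio.sleep(10), fut)


print(asyncio.run(main()))  # cancelled
```

Without the extra task, the plain `await fn(...)` would have to run to completion before the worker could even observe that the item's future was cancelled.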
Suggested change:

```diff
             continue
-        if not item_future.cancelled():
-            item_future.set_result(result)
+        else:
+            if not item_future.cancelled():
+                item_future.set_result(result)
```
After some tinkering, I managed to make the code more compact, but the new version cannot use try-except-else if it is to remain compact.
Please review the new version and consider which way you would prefer.
For easier comparison, here's a try-except-else version:
```python
async def _worker(self) -> None:
    while True:
        try:
            work_item = await self._input_queue.get()
            item_future = work_item.future
            try:
                if item_future.cancelled():
                    continue
                task = create_task(work_item.fn(
                    *work_item.args,
                    **work_item.kwargs,
                ))
                await wait([task, item_future], return_when=FIRST_COMPLETED)
                if item_future.cancelled():
                    task.cancel()
                    continue
                result = task.result()
            except BaseException as exception:
                if not item_future.cancelled():
                    item_future.set_exception(exception)
            else:
                if not item_future.cancelled():
                    item_future.set_result(result)
            finally:
                self._input_queue.task_done()
        except QueueShutDown:  # The executor has been shut down.
            break
```
Suggested change:

```diff
-items = await gather(*[anext(iterator) for iterator in iterators])
+items = [anext(iterator) for iterator in iterators]
```
await gather() is not free; it creates a task per item.
Let's use a plain list comprehension here.
If a user wants concurrency, they can create a task explicitly, providing the arguments.
What do you think about context vars?
Right now, workers run with a context implicitly copied when the Executor is created.
How could I submit a function with the context that I have at the moment of the submit() call?
While I have no strong opinions about context vars, I do see that they can be useful.
I suggest adding a `context: Context | None = None` parameter to submit() and map() that will be set to `contextvars.copy_context()` if None, and propagating the context in _WorkItem to be used inside a worker when executing fn.
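A minimal sketch of the suggested capture-at-submit behavior, with a hypothetical `submit_with_context` helper standing in for the real submit(); for async callables the captured context could instead be handed to the task (create_task accepts `context=` since 3.11):

```python
import contextvars

request_id = contextvars.ContextVar("request_id", default="unset")


def submit_with_context(fn, context=None):
    # Hypothetical helper: capture the submitter's context at call time.
    # The real submit() would stash the context in the work item; here we
    # just return a closure a worker could invoke later.
    if context is None:
        context = contextvars.copy_context()
    return lambda: context.run(fn)


def read_id():
    return request_id.get()


request_id.set("req-42")
job = submit_with_context(read_id)
request_id.set("req-99")  # later changes don't leak into the captured job
print(job())  # req-42
```

The key property: the job observes the variables as they were at submit() time, not at Executor creation or at execution time.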
This argument is never used; please drop it.
asyncio.Executor is not 100% compatible with concurrent.futures.Executor anyway.
I think the function could be extracted into a method.
Please replace wait_for() with async with timeout().
If the code waits for all futures to be cancelled, there is no need to create a task per future.
A for loop awaiting the futures one by one would be enough; it doesn't wait for the last future any longer than the current code does.
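A sketch of the sequential approach (`drain_cancelled` is a made-up name); awaiting already-settled futures one by one costs nothing extra, since the loop still finishes as soon as the slowest future settles:

```python
import asyncio


async def drain_cancelled(futures):
    # Await the futures one by one -- no task per future is created.
    settled = 0
    for fut in futures:
        try:
            await fut
        except asyncio.CancelledError:
            pass  # a cancelled future raises when awaited; that's expected
        settled += 1
    return settled


async def main():
    loop = asyncio.get_running_loop()
    futs = [loop.create_future() for _ in range(3)]
    for f in futs:
        f.cancel()
    return await drain_cancelled(futs)


print(asyncio.run(main()))  # 3
```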
Could these three lines be reduced to a bare `self._input_queue.shutdown()`?
Could a worker raise an exception? Should .shutdown() propagate the exception to a caller?
Unless there is a bug in the implementation, a worker should never raise an exception.
`AsyncIterable`, `Iterable`, and `Awaitable` live in `collections.abc` now; `Optional[T]` should be replaced with `T | None`.