Implement ActionQueue for batching actions in OverkizClient #1866
base: v2/main
Conversation
Force-pushed from 9b2bc1e to 7ee723f
Force-pushed from fc25ecc to fb87f9e
- Bump minimum required Python version to 3.12
- Update CI/CD
…oup for Execution typing (#1864)
- Rename scenario to action group
- Rename test
- Fix docstring formatting in ActionGroup class
- Remove unnecessary assertions in ActionGroup initialization
Force-pushed from 78fedac to d205616
…methods. (#1862)

## Breaking changes
- `client.execute_command()` and `client.execute_commands()` are replaced by `client.execute_action_group()`

## Enhancements
- `client.execute_action_group()` now supports multiple execution modes (high priority, internal, geolocated)
- `client.execute_action_group()` now supports multiple device actions in the same request

## Proposal
The current execution methods are poorly typed and do not support concurrent execution across multiple devices, which makes it impossible to properly work around TooManyExecutionsException and TooManyConcurrentRequestsException.

The main change is the move from `client.execute_command()` and `client.execute_commands()` to a single `client.execute_action_group()`. An action group takes a list of actions, each of which can include multiple device actions, including multiple commands per action.

#### Method examples

```python3
await client.execute_action_group(
    actions=[
        Action(
            device_url="io://1234-5678-1234/12345678",
            commands=[
                Command(name="down"),
                Command(name="refresh")
            ]
        )
    ],
    label="Execution via Home Assistant"
)
```

New mode options, such as high priority, are now possible:

```python3
await client.execute_action_group(
    actions=[
        Action(
            device_url="io://1234-5678-1234/12345678",
            commands=[
                Command(name=OverkizCommand.SET_CLOSURE, parameters=[0])
            ]
        )
    ],
    label="Execution via Home Assistant",
    mode=CommandMode.HIGH_PRIORITY
)
```

### Next steps
This could serve as a foundation for grouping commands that are executed within a short time window, for example when triggered by a scene or automation in Home Assistant. Requests issued close together could be batched and sent as a single action group, reducing the impact of current Overkiz limitations. The open question is where this queue should live: inside this integration itself, or as part of the Home Assistant core implementation.

---------

Co-authored-by: Copilot <[email protected]>
Force-pushed from 47f32f7 to a5bfb4b
Pull request overview
This PR implements an optional ActionQueue feature for the OverkizClient to address Overkiz API concurrency limits by batching multiple action executions into single API calls within a configurable time window.
Key Changes:
- Introduces `ActionQueue` and `QueuedExecution` classes for batching action executions
- Adds optional queue configuration to `OverkizClient` (disabled by default for backward compatibility)
- Modifies `execute_action_group` to support both immediate and queued execution modes (see the usage sketch below)
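Based on the description above, here is a minimal usage sketch of the queued mode. The import paths, the credential class from #1867, and the exact return value are assumptions drawn from this PR's discussion, not the confirmed API.

```python3
# Sketch only: import paths, credential classes and exact signatures are assumptions.
import asyncio

from pyoverkiz.client import OverkizClient
from pyoverkiz.credentials import UsernamePasswordCredentials  # module path assumed
from pyoverkiz.enums import Server
from pyoverkiz.models import Action, Command


async def main() -> None:
    client = OverkizClient(
        server=Server.SOMFY_EUROPE,
        credentials=UsernamePasswordCredentials("user@example.com", "password"),
        action_queue_enabled=True,      # queue is off by default
        action_queue_delay=0.5,         # seconds to wait before flushing a batch
        action_queue_max_actions=20,    # auto-flush once this many actions are queued
    )

    # Calls issued within the delay window should be merged into a single
    # action-group request; each awaited call still resolves to its own exec id.
    exec_ids = await asyncio.gather(
        client.execute_action_group(
            actions=[
                Action(
                    device_url="io://1234-5678-1234/12345678",
                    commands=[Command(name="down")],
                )
            ],
            label="Cover down",
        ),
        client.execute_action_group(
            actions=[
                Action(
                    device_url="io://1234-5678-1234/12345678",
                    commands=[Command(name="refresh")],
                )
            ],
            label="Refresh state",
        ),
    )
    print(exec_ids)


asyncio.run(main())
```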
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 19 comments.
| File | Description |
|---|---|
| `pyoverkiz/action_queue.py` | New module implementing ActionQueue for batching actions and QueuedExecution for awaitable results |
| `pyoverkiz/client.py` | Integrates ActionQueue with new constructor parameters, modified execute_action_group method, and helper methods for queue management |
| `pyoverkiz/models.py` | Expands Command.name type to accept str or OverkizCommand |
| `tests/test_action_queue.py` | Comprehensive unit tests for ActionQueue functionality including batching, flushing, and error handling |
| `tests/test_client_queue_integration.py` | Integration tests validating queue behavior with OverkizClient |
| `test_queue_example.py` | Demonstration script showing queue feature usage patterns |
Comments suppressed due to low confidence (1)
pyoverkiz/models.py:479
- The `Command.name` field type has been changed from `OverkizCommand` to `str | OverkizCommand`, but the `__init__` method still expects only `OverkizCommand`. This inconsistency could lead to type checking issues. The `__init__` signature should be updated to accept `str | OverkizCommand` to match the field type definition, or the field type should be reverted if string support is not needed.
```python3
name: str | OverkizCommand
parameters: list[str | int | float | OverkizCommandParam] | None

def __init__(
    self,
    name: OverkizCommand,
```
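A sketch of the change this comment asks for, with stand-in enums because only the excerpt above is visible here; the real class has more fields and logic.

```python3
# Sketch only: stand-in enums; the actual Command class is more involved.
from enum import Enum


class OverkizCommand(str, Enum):
    ...


class OverkizCommandParam(str, Enum):
    ...


class Command:
    name: str | OverkizCommand
    parameters: list[str | int | float | OverkizCommandParam] | None

    def __init__(
        self,
        name: str | OverkizCommand,  # widened to match the declared field type
        parameters: list[str | int | float | OverkizCommandParam] | None = None,
    ) -> None:
        self.name = name
        self.parameters = parameters
```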
@copilot please look extensively at all review comments and address them
…ial classes (#1867)

- Introduced UsernamePasswordCredentials and LocalTokenCredentials for better credential management.
- Updated OverkizClient to utilize the new credential classes and refactored login logic.
- Added authentication strategies for various servers, including Somfy and Rexel.
- Created new modules for auth strategies and credentials to improve code organization.
- Enhanced README with updated usage examples for the new authentication methods.

## Breaking
- `OverkizServer` class is renamed to `ServerConfig` and has additional `server` and `type` (cloud/local) properties
- `generate_local_server` is renamed to `create_local_server_config`
- `client.api_type` is removed and now available via `ServerConfig` (e.g. `client.server_config.type`)
- The `OverkizClient` constructor now requires passing a `ServerConfig` via `server`
- The `OverkizClient` constructor now requires passing a `Credentials` class via `credentials`, e.g. `UsernamePasswordCredentials(USERNAME, PASSWORD)` for most servers.

## Features
- The OverkizClient constructor now supports passing a Server enum directly, such as `OverkizClient(server=Server.SOMFY_EUROPE, ...)`.

---------

Co-authored-by: Copilot <[email protected]>
Co-authored-by: iMicknl <[email protected]>
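A short sketch of what the new constructor calls might look like under these changes. The import paths and the `create_local_server_config` signature are assumptions; only the class and property names come from the changelog above.

```python3
# Sketch only: import paths and the local-server helper signature are assumptions.
from pyoverkiz.client import OverkizClient
from pyoverkiz.credentials import LocalTokenCredentials, UsernamePasswordCredentials
from pyoverkiz.enums import Server
from pyoverkiz.servers import create_local_server_config  # module path assumed

# Cloud servers: pass a Server enum directly plus username/password credentials.
cloud_client = OverkizClient(
    server=Server.SOMFY_EUROPE,
    credentials=UsernamePasswordCredentials("user@example.com", "password"),
)

# client.api_type is gone; the cloud/local type now lives on the server config.
print(cloud_client.server_config.type)

# Local API: build a ServerConfig and authenticate with a local token.
local_client = OverkizClient(
    server=create_local_server_config("gateway-1234.local"),  # arguments assumed
    credentials=LocalTokenCredentials("local-api-token"),
)
```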
…onsistency and clarity
…er and enhance example script with type hints
…#1876)

- [x] Update integration tests to match the new simplified API
  - Updated test_client_with_queue_batches_actions to use asyncio.create_task for concurrent execution
  - Updated test_client_manual_flush to start execution as a task before checking pending count
  - Updated test_client_close_flushes_queue to work with str return type instead of QueuedExecution
  - Updated test_client_queue_respects_max_actions to handle direct exec_id returns
  - All 15 queue-related tests now passing (10 unit tests + 5 integration tests)

---------

Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: iMicknl <[email protected]>
Force-pushed from dab4dcb to 85aa600
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 14 comments.
```python3
except Exception as exc:
    # Propagate exception to all waiters
    for waiter in waiters:
        waiter.set_exception(exc)
    raise
```
Copilot AI (Jan 3, 2026):
After propagating the exception to all waiters, the exception is re-raised. However, this might cause issues because if flush() or shutdown() calls this method, the exception will bubble up and potentially interrupt the cleanup process. Consider whether re-raising is necessary here, or if it should be handled differently when called from shutdown/flush contexts to ensure graceful cleanup.
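One way to act on this, shown as a sketch rather than the PR's actual code; the method, parameter, and attribute names are assumptions based on the excerpts in this review.

```python3
# Sketch only: names are assumptions based on the excerpts in this review.
class ActionQueue:
    async def _execute_batch(self, actions, waiters, *, reraise: bool = True) -> None:
        try:
            exec_id = await self._executor(actions)
        except Exception as exc:
            # Every caller still observes the failure through its waiter ...
            for waiter in waiters:
                waiter.set_exception(exc)
            # ... but flush()/shutdown() can pass reraise=False so one failed
            # batch does not interrupt the rest of the cleanup.
            if reraise:
                raise
        else:
            for waiter in waiters:
                waiter.set_result(exec_id)
```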
```python3
if action_queue_delay <= 0:
    raise ValueError("action_queue_delay must be positive")
if action_queue_max_actions < 1:
    raise ValueError("action_queue_max_actions must be at least 1")
```
Copilot AI (Jan 3, 2026):
The error message should include the actual value that was provided to help with debugging. Consider changing to "action_queue_max_actions must be at least 1, got {action_queue_max_actions}".
Suggested change:

```python3
raise ValueError(
    f"action_queue_max_actions must be at least 1, got {action_queue_max_actions}"
)
```
| """Constructor. | ||
| :param server: ServerConfig | ||
| :param session: optional ClientSession | ||
| :param action_queue_enabled: enable action batching queue (default False) | ||
| :param action_queue_delay: seconds to wait before flushing queue (default 0.5) | ||
| :param action_queue_max_actions: maximum actions per batch before auto-flush (default 20) | ||
| """ |
Copilot AI (Jan 3, 2026):
The docstring is missing documentation for the credentials and verify_ssl parameters. These parameters should be documented to provide complete information about all constructor parameters.
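A sketch of the completed docstring; the wording of the two new entries is an assumption, not the PR's text.

```python3
"""Constructor.

:param server: ServerConfig
:param credentials: credentials used to authenticate against the server
:param session: optional ClientSession
:param verify_ssl: whether to verify SSL certificates
:param action_queue_enabled: enable action batching queue (default False)
:param action_queue_delay: seconds to wait before flushing queue (default 0.5)
:param action_queue_max_actions: maximum actions per batch before auto-flush (default 20)
"""
```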
```python3
# Initialize action queue if enabled
if action_queue_enabled:
    if action_queue_delay <= 0:
        raise ValueError("action_queue_delay must be positive")
```
Copilot AI (Jan 3, 2026):
The error message should include the actual value that was provided to help with debugging. Consider changing to "action_queue_delay must be positive, got {action_queue_delay}".
Suggested change:

```python3
raise ValueError(f"action_queue_delay must be positive, got {action_queue_delay}")
```
```python3
# Execute the first batch if it exists, then the second
if batch_to_execute:
    await self._execute_batch(*batch_to_execute)
batch_to_execute = new_batch
```
Copilot AI (Jan 3, 2026):
Executing a batch while holding the lock at line 121 could cause long lock hold times and potential deadlocks. The batch execution involves an async network call via _executor, which could take significant time. This blocks all other queue operations during execution.
Consider refactoring to execute both batches outside the lock. You could collect both batches to execute and then execute them sequentially after releasing the lock.
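A sketch of that refactor, with attribute and method names assumed from the excerpts in this review rather than taken from the PR's code.

```python3
# Sketch only: collect the batches while holding the lock, execute them after releasing it.
class ActionQueue:
    async def _flush(self) -> None:
        batches: list[tuple] = []
        async with self._lock:
            # Decide what needs to run and reset state while the lock is held ...
            if self._batch_to_execute:
                batches.append(self._batch_to_execute)
            if self._new_batch:
                batches.append(self._new_batch)
            self._batch_to_execute = None
            self._new_batch = None
        # ... but keep the slow network calls outside the lock, so other
        # coroutines can keep enqueueing actions while a batch is in flight.
        for batch in batches:
            await self._execute_batch(*batch)
```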
…ernamePasswordCredentials for improved clarity and consistency
Pull request overview
Copilot reviewed 6 out of 6 changed files in this pull request and generated 3 comments.
```python3
raise ValueError("action_queue_delay must be positive")
if action_queue_max_actions < 1:
    raise ValueError("action_queue_max_actions must be at least 1")
```
Copilot AI (Jan 3, 2026):
The error messages should specify the actual invalid values that were provided to help users debug configuration issues. For example, "action_queue_delay must be positive, got {action_queue_delay}" would be more helpful.
Suggested change:

```python3
raise ValueError(
    f"action_queue_delay must be positive, got {action_queue_delay!r}"
)
if action_queue_max_actions < 1:
    raise ValueError(
        "action_queue_max_actions must be at least 1, "
        f"got {action_queue_max_actions!r}"
    )
```
```python3
self._future: asyncio.Future[str] = asyncio.Future()

def set_result(self, exec_id: str) -> None:
    """Set the execution ID result."""
    if not self._future.done():
        self._future.set_result(exec_id)

def set_exception(self, exception: BaseException) -> None:
    """Set an exception if the batch execution failed."""
    if not self._future.done():
        self._future.set_exception(exception)

def is_done(self) -> bool:
    """Check if the execution has completed (either with result or exception)."""
    return self._future.done()

def __await__(self):
    """Make this awaitable."""
    return self._future.__await__()
```
Copilot AI (Jan 3, 2026):
Creating `asyncio.Future()` in `__init__` (a synchronous method) can cause issues. The Future should be created lazily when first needed in an async context. Consider creating the future in the `add()` method the first time it's called, or switching to using `asyncio.Event` and managing the result separately.
Suggested change:

```python3
self._future: asyncio.Future[str] | None = None

def _ensure_future(self) -> asyncio.Future[str]:
    """Create the underlying future lazily, bound to the running event loop."""
    if self._future is None:
        loop = asyncio.get_running_loop()
        self._future = loop.create_future()
    return self._future

def set_result(self, exec_id: str) -> None:
    """Set the execution ID result."""
    future = self._ensure_future()
    if not future.done():
        future.set_result(exec_id)

def set_exception(self, exception: BaseException) -> None:
    """Set an exception if the batch execution failed."""
    future = self._ensure_future()
    if not future.done():
        future.set_exception(exception)

def is_done(self) -> bool:
    """Check if the execution has completed (either with result or exception)."""
    return self._future.done() if self._future is not None else False

def __await__(self):
    """Make this awaitable."""
    return self._ensure_future().__await__()
```
```python3
def get_pending_count(self) -> int:
    """Get the (approximate) number of actions currently waiting in the queue.

    This method does not acquire the internal lock and therefore returns a
    best-effort snapshot that may be slightly out of date if the queue is
    being modified concurrently by other coroutines.
    """
    return len(self._pending_actions)
```
Copilot AI (Jan 3, 2026):
While the documentation acknowledges this is a best-effort read, accessing self._pending_actions without the lock in a concurrent environment can lead to subtle bugs in Python due to list operations not being atomic. Consider acquiring the lock briefly to get an accurate count, or document more clearly that this method may return stale data and should not be relied upon for critical logic.
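A sketch of the locked variant this comment suggests; it would have to become a coroutine, and the attribute names are taken from the excerpt above rather than confirmed against the PR's code.

```python3
# Sketch only: an exact count at the cost of making the method async.
class ActionQueue:
    async def get_pending_count(self) -> int:
        """Return the exact number of actions currently waiting in the queue."""
        async with self._lock:
            return len(self._pending_actions)
```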
Fixes #1865