fix: Cancel _target task in AsyncWorker.kill() and improve sync close()

c46fb6f

feat: Add experimental async transport (port of PR #4572) #5646

GitHub Actions / warden completed Mar 12, 2026 in 17m 10s

10 issues

High

__aexit__ silently fails to close sync-transport clients - `sentry_sdk/client.py:1144-1145`

When async with Client(...) is used with a sync transport (the default unless the transport_async experiment is enabled), __aexit__ calls close_async(), which returns early at line 1073 without performing any cleanup. This leaves the session flusher, batchers, monitor, and transport resources unclosed. Users may reasonably use async with in async code even without the async transport experiment and expect proper cleanup.
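
A minimal sketch of the fallback behavior the finding asks for: when the transport is synchronous, close_async() delegates to the synchronous close() instead of returning early. The class, attribute, and method names here are assumptions drawn from the finding, not the SDK's actual implementation.

```python
import asyncio

class Client:
    """Illustrative stand-in for sentry_sdk.Client; names are assumptions
    based on the finding, not the SDK's real internals."""

    def __init__(self, async_transport=False):
        self.async_transport = async_transport
        self.closed = False

    def close(self):
        # Synchronous cleanup path: session flusher, batchers, monitor,
        # and transport would be shut down here.
        self.closed = True

    async def close_async(self):
        if not self.async_transport:
            # Fall back to the synchronous close() instead of returning
            # early, so `async with` still releases resources.
            self.close()
            return
        # ... async transport shutdown would go here ...
        self.closed = True

    async def __aenter__(self):
        return self

    async def __aexit__(self, exc_type, exc, tb):
        await self.close_async()

async def main():
    async with Client(async_transport=False) as client:
        pass
    return client.closed

print(asyncio.run(main()))  # True: the sync-transport client is closed
```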

Test test_loop_close_flushes_async_transport depends on Mock(spec=...) satisfying the isinstance check - `tests/integrations/asyncio/test_asyncio.py:665-670`

The test creates mock_transport = Mock(spec=AsyncHttpTransport), and the production code patch_loop_close() uses isinstance(client.transport, AsyncHttpTransport) to decide whether the transport is async. A Mock created with spec= does satisfy isinstance checks against the spec class: per the unittest.mock documentation, spec sets the mock's __class__ in addition to mirroring its attributes and methods. The check at lines 71-72 therefore passes and close_async() is called as the test expects. The test would only break if the gate were an exact type comparison (type(client.transport) is AsyncHttpTransport); if that ever changes, the test would need a real AsyncHttpTransport instance.
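
The Mock semantics at issue can be checked directly. This snippet uses a local stand-in class under the assumed name AsyncHttpTransport, since it does not import the SDK:

```python
from unittest.mock import Mock

class AsyncHttpTransport:
    """Stand-in for the SDK's transport class (assumed name from the report)."""

# spec= mirrors the class's attribute surface AND sets the mock's
# __class__ to the spec class, so isinstance() checks pass.
mock_transport = Mock(spec=AsyncHttpTransport)

print(isinstance(mock_transport, AsyncHttpTransport))  # True
print(type(mock_transport) is AsyncHttpTransport)      # False: exact type checks still fail
```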

Medium

AsyncHttpTransport ignores keep_alive configuration option - `sentry_sdk/transport.py:888-897`

The AsyncHttpTransport._get_pool_options() method always applies keep-alive socket options, regardless of the keep_alive configuration setting. This differs from HttpTransport._get_pool_options(), which applies them only when self.options["keep_alive"] is True. Users who explicitly set keep_alive=False will find their setting ignored when using the async transport.
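
A minimal sketch of the guard described above, mirroring the sync transport's behavior. KEEP_ALIVE_SOCKET_OPTIONS and the exact pool-option keys are assumptions for illustration, not the SDK's real values:

```python
import socket

# Hypothetical keep-alive options; the SDK's actual list differs per platform.
KEEP_ALIVE_SOCKET_OPTIONS = [
    (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1),
]

class AsyncHttpTransport:
    def __init__(self, options):
        self.options = options

    def _get_pool_options(self):
        pool_options = {}
        socket_options = []
        # Mirror HttpTransport: only add keep-alive socket options when
        # the user has opted in via keep_alive=True.
        if self.options.get("keep_alive"):
            socket_options.extend(KEEP_ALIVE_SOCKET_OPTIONS)
        if socket_options:
            pool_options["socket_options"] = socket_options
        return pool_options

print(AsyncHttpTransport({"keep_alive": False})._get_pool_options())  # {}
```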

AsyncWorker queue not reset on kill() causes restart failure - `sentry_sdk/worker.py:220-231`

The kill() method puts a _TERMINATOR in the queue (line 220) but does not reset self._queue to None. When start() is called later, it checks if self._queue is None (line 237) and reuses the existing queue containing the terminator. The new _target() task will immediately read the terminator and exit, preventing the worker from processing any new items after restart.
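
A sketch of the restart fix: kill() drops the queue reference so the next start() builds a fresh queue, while the old task drains its own (captured) queue. This is a simplified stand-in for sentry_sdk.worker, with method names taken from the finding:

```python
import asyncio

_TERMINATOR = object()

class AsyncWorker:
    """Minimal restartable worker sketch; the real sentry_sdk.worker differs."""

    def __init__(self):
        self._queue = None
        self._task = None

    def start(self):
        if self._queue is None:
            # A fresh queue per start() means no stale terminator from a
            # previous kill() is ever consumed by the new task.
            self._queue = asyncio.Queue()
        self._task = asyncio.ensure_future(self._target(self._queue))

    def kill(self):
        if self._queue is not None:
            self._queue.put_nowait(_TERMINATOR)
        # The fix: drop the queue reference so the next start() rebuilds it.
        self._queue = None

    async def _target(self, queue):
        while True:
            item = await queue.get()
            if item is _TERMINATOR:
                break
            # ... process item ...

async def demo():
    worker = AsyncWorker()
    worker.start()
    worker.kill()
    await asyncio.sleep(0)   # let the old task drain its terminator
    worker.start()           # restart gets a brand-new, empty queue
    alive = not worker._task.done()
    worker.kill()
    await asyncio.sleep(0)
    return alive

print(asyncio.run(demo()))  # True: the restarted worker keeps running
```

Note that _target() takes the queue as a parameter rather than reading self._queue, so the old consumer keeps a valid reference after kill() clears the attribute.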

Duplicate report: Mock(spec=...) and the isinstance check in the loop-close test - `tests/integrations/asyncio/test_asyncio.py:662-663`

The test's mock_transport = Mock(spec=AsyncHttpTransport) is checked with isinstance(client.transport, AsyncHttpTransport) in patch_loop_close._flush() (line 71 in asyncio.py). Because Mock(spec=...) sets the mock's __class__ to the spec class, the isinstance check passes and _flush() does not return early at line 72, so the call to client.close_async() is exercised as intended.

httpcore[asyncio] dependency will fail for Python 3.6 and 3.7 common tests - `tox.ini:340`

The common: httpcore[asyncio] dependency is added without a Python version constraint, but httpcore 1.x requires Python 3.8+. The common test environment includes py3.6 and py3.7 (line 21: {py3.6,py3.7,...}-common), so those environments will fail during dependency installation. A version-qualified selector such as {py3.8,...}-common: httpcore[asyncio] would fix this, matching how other version-specific dependencies are handled (e.g., line 338: py3.8-common: hypothesis).

SOCKS proxy failure silently creates direct connection instead of intended fallback - `sentry_sdk/transport.py:952-956`

When httpcore.AsyncSOCKSProxy() raises a RuntimeError (indicating SOCKS support is not installed), the exception is caught and a warning about 'Disabling proxy support' is logged, but the code then falls through without returning. Execution continues to line 960, which creates an AsyncConnectionPool with no proxy settings, bypassing the user's intended proxy configuration entirely. This can leak traffic that was meant to go through a SOCKS proxy, a security/privacy concern if the proxy was being used for anonymity or network isolation.
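
A sketch of the fail-closed control flow the finding calls for. The helper names here are stand-ins (the RuntimeError simulates httpcore's missing-socksio failure); the point is the early return instead of falling through to a direct pool:

```python
import logging

logger = logging.getLogger("sentry_sdk.transport")

def _make_socks_proxy(proxy_url):
    # Stand-in for httpcore.AsyncSOCKSProxy; simulates the
    # "SOCKS support not installed" failure mode with RuntimeError.
    raise RuntimeError("SOCKS proxy support requires socksio")

def _make_direct_pool():
    # Stand-in for httpcore.AsyncConnectionPool.
    return "direct-pool"

def make_async_pool(proxy_url):
    if proxy_url and proxy_url.startswith("socks"):
        try:
            return _make_socks_proxy(proxy_url)
        except RuntimeError:
            logger.warning(
                "SOCKS support not installed; refusing to fall back to a "
                "direct connection for %s", proxy_url
            )
            # Fail closed: return early instead of falling through to a
            # direct pool that bypasses the configured proxy.
            return None
    return _make_direct_pool()

print(make_async_pool("socks5://localhost:9050"))  # None, not a direct pool
```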

AsyncWorker.kill() leaves terminator in queue, causing immediate shutdown on restart - `sentry_sdk/worker.py:29-31`

In AsyncWorker.kill(), a _TERMINATOR is added to the queue (line 220), but the queue reference is never cleared (self._queue = None is missing). If start() is called again, the check at line 237 sees a non-None queue and reuses it, so the new consumer task immediately processes the stale terminator and exits. This makes the worker non-restartable without creating a new instance.

Low

Test does not clean up global scope client, potentially causing test isolation issues - `tests/test_transport.py:1074`

The test_async_transport_concurrent_requests test sets the global scope client via sentry_sdk.get_global_scope().set_client(client) but does not register a finalizer to reset it. This can leak state between tests and cause flaky, order-dependent failures. Compare with test_async_transport_rate_limiting_with_concurrency, which properly uses request.addfinalizer(lambda: sentry_sdk.get_global_scope().set_client(None)).
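
A self-contained sketch of the pytest finalizer pattern recommended above. GlobalScope and FakeRequest are local stand-ins for sentry_sdk's global scope and pytest's request fixture, so the snippet runs without either dependency:

```python
class GlobalScope:
    """Stand-in for sentry_sdk's global scope."""
    def __init__(self):
        self.client = None

    def set_client(self, client):
        self.client = client

_global_scope = GlobalScope()

def get_global_scope():
    return _global_scope

class FakeRequest:
    """Mimics pytest's `request` fixture finalizer hook."""
    def __init__(self):
        self._finalizers = []

    def addfinalizer(self, fn):
        self._finalizers.append(fn)

    def teardown(self):
        for fn in reversed(self._finalizers):
            fn()

def test_async_transport_concurrent_requests(request):
    get_global_scope().set_client("client")
    # Register the reset up front so it runs even if the test body fails.
    request.addfinalizer(lambda: get_global_scope().set_client(None))
    # ... test body ...

req = FakeRequest()
test_async_transport_concurrent_requests(req)
req.teardown()
print(get_global_scope().client)  # None: no state leaks to the next test
```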

Missing space in test assertion error message creates malformed output - `tests/test_client.py:1905-1906`

The f-string assertion message on lines 1905-1906 is missing a space between the two string parts. When should_be_socks_proxy is True/False and the assertion fails, the message will read "SOCKS == Truebut got" or "SOCKS == Falsebut got" instead of "SOCKS == True but got" or "SOCKS == False but got". This makes test failure messages harder to read.
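
A minimal reproduction of the concatenation bug and its one-character fix, with a hypothetical observed value:

```python
should_be_socks_proxy = True
actual = "httpcore.AsyncConnectionPool"  # hypothetical observed value

# Broken: adjacent f-string parts with no separating space.
broken = (
    f"Expected SOCKS == {should_be_socks_proxy}"
    f"but got {actual}"
)

# Fixed: a leading space on the second part keeps the message readable.
fixed = (
    f"Expected SOCKS == {should_be_socks_proxy}"
    f" but got {actual}"
)

print(broken)  # Expected SOCKS == Truebut got httpcore.AsyncConnectionPool
print(fixed)   # Expected SOCKS == True but got httpcore.AsyncConnectionPool
```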

4 skills analyzed
Skill            Findings   Duration   Cost
code-review      6          8m 17s     $7.48
find-bugs        4          16m 23s    $14.36
skill-scanner    0          8m 39s     $1.77
security-review  0          17m 2s     $2.35

Duration: 50m 21s · Tokens: 19.6M in / 167.4k out · Cost: $26.02 (+extraction: $0.01, +merge: $0.00, +fix_gate: $0.02, +dedup: $0.02)