-
Your approach looks good, and you also seem to have tested it well (although it would be interesting to see what it does when …). A few suggested notes of improvement: …
Eg. (warning - untested code):

```python
async def _request(self, method, url, data=None, json=None, ssl=None, params=None, headers={}):
    redir_cnt = 0
    if self._timeout is not None:
        # Fix the overall deadline once, so redirects share one budget.
        stop_time = time.time() + self._timeout
    while redir_cnt < 2:
        reader = await asyncio.wait_for(
            self.request_raw(method, url, data, json, ssl, params, headers),
            # Remaining budget, or no timeout at all if none was set.
            None if self._timeout is None else (stop_time - time.time()),
        )
        ...
```

The first dot point should fix a potential bug. The second two are just suggested improvements, but your code works fine without them. The last one should ensure what I believe is expected behaviour when redirects are involved (but I am not 100% sure what the expected behaviour is here).
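One note on the deadline arithmetic in that sketch: on MicroPython ports, `time.time()` returns whole seconds, so `stop_time - time.time()` loses sub-second resolution (the next comment runs into exactly this). Here is a hedged variant of the same idea using millisecond ticks instead; `wait_for_ms` is MicroPython's millisecond version of `wait_for`, and `make_request` is an illustrative stand-in for the `request_raw` call, not a name from the package:

```python
import time
import asyncio

async def timed_request(make_request, timeout):
    # Hedged sketch: make_request is an illustrative zero-argument
    # callable that returns a fresh awaitable (e.g. the request_raw
    # call); timeout is in seconds, as in the code above.
    stop = time.ticks_add(time.ticks_ms(), int(timeout * 1000))
    # Recompute the remaining budget each time this point is reached,
    # so repeated attempts (e.g. redirects) share one overall deadline.
    remaining = time.ticks_diff(stop, time.ticks_ms())
    # wait_for_ms is MicroPython's millisecond variant of wait_for.
    return await asyncio.wait_for_ms(make_request(), remaining)
```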
-
Thanks again for the context and further help. I was converting to milliseconds because adding a float to time.time() does not seem to work; I think we get into rounding errors. So I am sticking with converting the timeout to ms…

I created a minimal test that assumes an internet connection but looks for a non-routable address, so it should always time out. This should report the response time with the timeout increasing from 0 to 2.9 seconds in 0.1 second steps:

```python
import time
import asyncio
import aiohttp

async def test_timeout(url, timeout):
    async with aiohttp.ClientSession(url, timeout=timeout) as session:
        async with session.get("/get") as resp:
            assert resp.status == 200
            rget = await resp.text()
            print(f"GET: {rget}")

# Sweep the timeout from 0 to 2.9 s in 0.1 s steps against a
# non-routable address, so every request should time out.
for t in range(0, 30, 1):
    try:
        t1 = time.ticks_ms()
        asyncio.run(test_timeout(url="http://10.255.255.1", timeout=t / 10))
        print(f"reply received: {time.ticks_diff(time.ticks_ms(), t1)}")
    except asyncio.TimeoutError:
        print(f"timeout after: {time.ticks_diff(time.ticks_ms(), t1)}")
```

When running this, after typically 4 iterations I get an out-of-memory error. I guess the requests are left open and consuming memory, though that seems like a lot of memory. Tracking this down, I can catch an asyncio.CancelledError in …. This approach, of wiping _request and using gc.collect(), does seem somewhat hacky, but I cannot find a more elegant way of closing the failed/failing requests and freeing memory. I found it surprisingly tricky to pin down the correct place to catch the CancelledError; I suppose in hindsight I needed to find the awaitable that was actually the cause of the timeout, which is request_raw. In case anyone else is interested, or can help me improve on this, the modified …
-
In the `__init__.py` for the aiohttp package there is a section that reads: …

I am interested in implementing a timeout, partly out of curiosity, partly because it could be useful for a current piece of work, though I would then have to build code to try to figure out why we timed out. Anyway… I have made a few small changes that seem to work for a timeout, for my use. I would be grateful for any insight as to whether this is a suitable implementation for incorporating into aiohttp, or whether perhaps I am not anticipating some knock-on effect?
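For context, judging from the changes discussed below, the quoted section is presumably the redirect loop in `_request` that awaits `request_raw` directly; a hedged reconstruction, not a verbatim copy of the package source:

```python
# Reconstruction of the presumed original (pre-timeout) code:
async def _request(self, method, url, data=None, json=None,
                   ssl=None, params=None, headers={}):
    redir_cnt = 0
    while redir_cnt < 2:
        reader = await self.request_raw(
            method, url, data, json, ssl, params, headers
        )
        ...  # parse the status line and handle redirects
```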
In `__init__.py`, change: … to: …
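As a hedged sketch of that change, using the `timeout` parameter and `self._timeout` attribute referenced elsewhere in this thread (the surrounding `ClientSession` signature is an assumption):

```python
# Hedged sketch: only the timeout parameter and self._timeout attribute
# come from this thread; the rest of the signature is an assumption
# about what ClientSession.__init__ already looks like.
class ClientSession:
    def __init__(self, base_url="", headers={}, version="1.1", timeout=None):
        self._base_url = base_url
        self._base_headers = headers
        self._version = version
        self._timeout = timeout  # seconds (float), or None for no timeout
```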
and below that, change the _request method from that shown above to: …
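A hedged reconstruction of the modified method, based on the reviewer's quoted variant earlier in the thread (minus the deadline refinements suggested there):

```python
async def _request(self, method, url, data=None, json=None,
                   ssl=None, params=None, headers={}):
    # Reconstruction, not the verbatim patch: wrap request_raw in
    # wait_for so it raises asyncio.TimeoutError after self._timeout
    # seconds (None disables the timeout).
    redir_cnt = 0
    while redir_cnt < 2:
        reader = await asyncio.wait_for(
            self.request_raw(method, url, data, json, ssl, params, headers),
            self._timeout,
        )
        ...  # status parsing and redirect handling as before
```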
I can then call the session with a timeout, as per the examples: …
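A minimal usage sketch in the spirit of the test script above (the URL and timeout value are illustrative only):

```python
import asyncio
import aiohttp

async def main():
    # timeout is the new constructor argument.
    async with aiohttp.ClientSession("http://micropython.org", timeout=5) as session:
        async with session.get("/") as resp:
            print(resp.status)

asyncio.run(main())
```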
If I wrap that session in a try block I can then trap it with an `except asyncio.TimeoutError` clause. I tested this with both small and reasonable timeout values of 0.1 and 5 seconds, and it does seem to work. This change does mean that if there are redirects in place the timeout will be applied once per redirect, so up to 3 times the timeout.
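And a sketch of trapping the timeout, reusing `main()` from the previous sketch:

```python
try:
    asyncio.run(main())
except asyncio.TimeoutError:
    # Raised by the wait_for inside _request once self._timeout elapses.
    print("request timed out")
```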