This package provides a duck-typed equivalent of `concurrent.futures.ThreadPoolExecutor`. It has a nearly identical API and can fully replace `ThreadPoolExecutor` in your code.
This package exists to solve several specific pain points around memory control in the native Python library.
## Features
- Fully interchangeable with `concurrent.futures.ThreadPoolExecutor`, for example in `asyncio`.
- Whenever a new task is submitted, the executor prefers to reuse an existing idle thread rather than create a new one.
- The executor automatically shrinks itself during idle periods, reducing memory usage and improving efficiency.
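The drop-in property above can be sketched as follows. The fallback to the stdlib class is only so the snippet runs even where the package is not installed; the call sites are identical either way:

```python
import concurrent.futures

# ThreadPoolExecutorPlus.ThreadPoolExecutor duck-types the stdlib pool,
# so the same code works with either class.
try:
    import ThreadPoolExecutorPlus
    Executor = ThreadPoolExecutorPlus.ThreadPoolExecutor
except ImportError:
    Executor = concurrent.futures.ThreadPoolExecutor

with Executor(max_workers=4) as executor:
    # submit() / map() behave exactly as with the stdlib executor.
    squares = list(executor.map(lambda x: x * x, range(5)))

print(squares)  # [0, 1, 4, 9, 16]
```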
## Install
pip install ThreadPoolExecutorPlus
## Usage
Same API as `concurrent.futures.ThreadPoolExecutor`, with some additional control functions. To guarantee an identical interface, the new features are configured after the object is created.

You can change the minimum/maximum number of active workers, and set how many seconds an idle thread waits before it is terminated.
By default, `min_workers = 4`, `max_workers = 256` on Windows and 512 on Linux, and `keep_alive_time = 100` seconds.
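A minimal sketch of that post-construction tuning. The method name `set_daemon_opts` and its keyword arguments are assumptions here, not confirmed by this section; check the package documentation for the exact interface:

```python
import concurrent.futures

try:
    import ThreadPoolExecutorPlus
    executor = ThreadPoolExecutorPlus.ThreadPoolExecutor()
    # Tune the pool after construction. `set_daemon_opts` is an assumed
    # name for the control function described above.
    executor.set_daemon_opts(min_workers=2, max_workers=8, keep_alive_time=60)
except ImportError:
    # Fall back to the stdlib pool so this sketch runs without the package;
    # the stdlib class offers no post-construction tuning.
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=8)

result = executor.submit(pow, 2, 10).result()
executor.shutdown()
print(result)  # 1024
```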
## Example
The very same code as the official example at [https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example](https://docs.python.org/3/library/concurrent.futures.html#threadpoolexecutor-example), with the executor replaced:
```Python3
# requests_test.py
import concurrent.futures
import ThreadPoolExecutorPlus
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

with ThreadPoolExecutorPlus.ThreadPoolExecutor(max_workers=5) as executor:
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
```
The same code as in [https://docs.python.org/3/library/asyncio-eventloop.html?highlight=asyncio%20run_in_executor#executing-code-in-thread-or-process-pools](https://docs.python.org/3/library/asyncio-eventloop.html?highlight=asyncio%20run_in_executor#executing-code-in-thread-or-process-pools), with the executor replaced:
```Python3
# Runs on Python versions above 3.7
import asyncio
import concurrent.futures
import ThreadPoolExecutorPlus

def blocking_io():
    with open('/dev/urandom', 'rb') as f:
        return f.read(100)

def cpu_bound():
    return sum(i * i for i in range(10 ** 7))

async def main():
    loop = asyncio.get_running_loop()

    with ThreadPoolExecutorPlus.ThreadPoolExecutor() as pool:
        result = await loop.run_in_executor(pool, blocking_io)
        print('custom thread pool', result)

asyncio.run(main())
```