# Got a lock problem here; slight chance it causes a deadlock, and I don't know how to fix it.
A bug here carries a potential risk.

Simply put, unregistering from queue listening (adjusting the free thread count by -1) is not atomic with the exception being thrown, so there is a slight chance that a thread switch (which is controlled by the OS and cannot be interfered with by user code) happens right after listening has stopped but before the free thread count is adjusted.

If a new task is submitted from the main thread at precisely that moment, the check in ThreadPoolExecutor._adjust_thread_count will mistakenly conclude that enough workers are still listening and decide not to spawn a new thread. The eventual result is a task added to the queue with no worker left to take it out. That last task will never run if no later submission triggers ThreadPoolExecutor._adjust_thread_count afterwards.
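A minimal sketch of the race, keeping the names from the description above (_free_thread_count, _adjust_thread_count); the queue attribute, timeout, and submit path are assumptions for illustration, and the free-count bookkeeping is simplified to just what the race needs:

```python
import queue
import threading

class ThreadPoolExecutor:
    """Minimal skeleton; only the parts relevant to the race."""

    def __init__(self, idle_timeout=1.0):
        self._work_queue = queue.Queue()   # attribute name assumed
        self._idle_timeout = idle_timeout  # attribute name assumed
        self._free_thread_count = 0

    def _worker_loop(self):
        self._free_thread_count += 1
        while True:
            try:
                task = self._work_queue.get(timeout=self._idle_timeout)
            except queue.Empty:
                # Listening stopped the moment Empty was raised, but this
                # thread is still counted as free until the next line runs.
                # An OS thread switch inside this window lets the main
                # thread's _adjust_thread_count() read a stale count.
                self._free_thread_count -= 1
                return  # a task enqueued inside the window is orphaned
            task()

    def _adjust_thread_count(self):
        # Called on submit; may see the dying worker as still listening
        # and skip spawning, leaving the new task stuck in the queue.
        if self._free_thread_count == 0:
            threading.Thread(target=self._worker_loop).start()

    def submit(self, task):
        self._work_queue.put(task)
        self._adjust_thread_count()
```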
To fix this problem, the most thorough (though high-cost) solution would be to hack into CPython's queue module (a .pyd file, corresponding to 'Modules/_queuemodule.c' in the source tree) so that self._executor._free_thread_count is adjusted before listening is suspended, inside the timeout's callback.
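The C-level change itself can't be shown in Python, but its intent can be approximated in pure Python: make the "stop listening" decision and the counter decrement atomic with respect to submissions by doing both under the same lock the submit path holds, with one last non-blocking re-check of the queue before the thread gives up. A sketch under that assumption (the lock name and function are hypothetical):

```python
import queue

def retire_on_timeout(executor):
    """Exit path for a worker whose get(timeout=...) raised queue.Empty."""
    # Holding the same lock the submit path uses makes "stop listening"
    # and the counter decrement one atomic step from the submitter's view.
    with executor._pool_lock:  # hypothetical lock, shared with submit()
        try:
            # A task submitted during the race window is claimed here
            # rather than left orphaned in the queue.
            return executor._work_queue.get_nowait()
        except queue.Empty:
            executor._free_thread_count -= 1
            return None  # caller lets the thread terminate
```

For this to close the window, _adjust_thread_count must read _free_thread_count while holding that same lock when it decides whether to spawn.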
If that sounds a little hard to implement, a simplified alternative would be to preserve several 'core threads' that keep looping on the timeout but never halt, equal in number to ThreadPoolExecutor.min_workers; a sketch follows.
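A sketch of that alternative, assuming the worker body takes a flag and that min_workers threads are started with is_core=True (the flag and function names are illustrative):

```python
import queue

def worker_loop(executor, is_core):
    """Worker body; min_workers threads are started with is_core=True."""
    while True:
        try:
            task = executor._work_queue.get(timeout=executor._idle_timeout)
        except queue.Empty:
            if is_core:
                continue  # core threads never halt; resume listening
            executor._free_thread_count -= 1
            return  # surplus threads still shrink away on idle timeout
        task()
```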
With this implementation, the stale-status bug can still occur, but there are always sub-threads alive to pull tasks off the queue. In the particular situation where the existing threads all happen to be fully loaded, tasks may block waiting on them, which is slightly out of line with expectations. Another potential symptom is that, depending on which task the system scheduled last, the size of the thread pool may not be precisely as expected immediately after a shrink has happened.